<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
<title>MongoDB | Blog</title>
<description>MongoDB news and information.</description>
<link>https://www.mongodb.com/blog</link><item>
      <title>Introducing MongoDB Agent Skills and Plugins for Coding Agents</title>
      <description><![CDATA[<p>Software engineering is evolving into agentic engineering. According to the Stack Overflow Developer Survey 2025, 84% of respondents use or plan to use AI tools in their development, up from 76% the previous year. At this rate, the tooling needs to keep pace.</p>
<p>Last year, we introduced the MongoDB MCP Server to give agents the connectivity they need to interact with MongoDB, helping them generate context-aware code. But connectivity was only the start. Agents are generalists by design, and they don't inherently know the best practices and design patterns that real-world production systems demand.</p>
<p>Today, we're addressing this by introducing official MongoDB Agent Skills: structured instructions, best practices, and resources that agents can discover and apply to generate more reliable code across the full development lifecycle, from schema design and performance optimization to implementing advanced capabilities like AI retrieval.</p>
<p>To bring this directly into the tools you use, we're also launching plugins for Claude Code, Cursor, Gemini CLI, and VS Code, combining the MongoDB MCP Server and Agent Skills in a single, ready-to-use package.</p>
<p>Turning coding agents into MongoDB experts</p>
<p>Coding agents are great at producing working code, but they still make common mistakes in production systems, often defaulting to relational thinking that doesn't translate well to MongoDB, such as:</p>
<ul>
<li>Over-normalizing schemas, ignoring MongoDB's document-oriented strengths.</li>
<li>Underusing compound indexes, causing performance bottlenecks at scale.</li>
<li>Misusing indexes and search indexes, overlooking the consistency trade-off for high-performance full-text search.</li>
</ul>
<p>Because these pitfalls mirror common human errors, they are naturally reflected in agent outputs. MongoDB Agent Skills address this by providing expert guidance to agents, like schema design heuristics, indexing strategies, query patterns, and operational safeguards, enabling agents to ship more reliable, more consistent code faster.</p>
<p>Agent Skills were introduced by Anthropic as an open standard and have since been adopted by the leading AI development tools, including Claude Code, Cursor, Codex, and more.</p>
<p>This initial release covers the full application development lifecycle on MongoDB, from connection management and schema design to guidance on implementing advanced capabilities. We will continue to update and expand our skills library based on user needs.</p>
<p>Figure 1. MongoDB Agent Skills.</p>
<p>Scaling agentic engineering with MongoDB</p>
<p>As organizations embrace agentic software engineering, existing processes and workflows must be reimagined. The MongoDB MCP Server and MongoDB Agent Skills are built for this shift and work best together, giving builders and agents the tools to move fast without sacrificing guardrails or control.</p>
<p>The MongoDB MCP Server serves as the connectivity layer for your MongoDB deployments. It manages authentication and defines exactly what agents can access and do. Combined with MongoDB’s native authorization, it ensures agents operate with only the permissions they need, while giving teams governance through configurable controls like disabling specific tools.</p>
<p>Agent Skills ensure agents follow best practices from the start, reducing architectural risk, accelerating implementation, and raising the baseline quality of all agent-generated code.</p>
<p>While some skills can be used independently, others work in conjunction with the MongoDB MCP Server for workflows that require it. To simplify setup, the MCP Server and skills are now packaged together as plugins and extensions for Claude Code, Cursor, Gemini CLI, and VS Code, bringing these capabilities directly into your preferred tools.</p>
<p>Figure 2. MongoDB for Claude plugin in action.</p>
<p>We also encourage you to build your own skills as your agentic workflows mature. Whether enforcing internal naming conventions, custom data modeling patterns, or team-specific workflows, skills give you a practical way to codify institutional knowledge and ensure every agent and every developer works from the same playbook.</p>
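<p>As a sketch of what a custom skill can look like, a skill is a directory containing a SKILL.md file with YAML frontmatter, following the Agent Skills format introduced by Anthropic. The skill name and rules below are hypothetical examples for illustration, not part of MongoDB's published library:</p>
<pre><code>---
name: acme-naming-conventions
description: Enforce ACME's internal MongoDB collection and field naming rules.
---

# ACME naming conventions

- Collection names are plural, lowercase snake_case (e.g., `user_profiles`).
- Field names are camelCase; store dates as BSON dates, never strings.
- Every collection must define an index supporting its primary query pattern.
</code></pre>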
<p>How to get started</p>
<p>Whether you’re using Claude Code, Cursor, Gemini CLI, or other AI development tools, you can install the MongoDB MCP Server and Agent Skills in seconds.</p>
<p>For example, in Claude Code, install the plugin that bundles both:</p>
<pre><code>/plugin marketplace add mongodb/agent-skills
/plugin install mongodb@mongodb-plugins</code></pre>
<p>For Cursor, Gemini CLI, and VS Code extensions, refer to their respective documentation.</p>
<p>You can also install the skills for most coding agents using the Vercel Skills CLI (requires Node.js):</p>
<pre><code>npx skills add mongodb/agent-skills</code></pre>
<p>If you prefer, you can manually clone the GitHub repository and copy the skills into the appropriate folder for your agent.</p>
<p>Similarly, to install the MongoDB MCP Server, use the following command:</p>
<pre><code>npx mongodb-mcp-server@latest setup</code></pre>
<p>Agentic engineering is changing how teams work, and it is changing fast. Agents need the context and guidance to meet the standards of real-world production applications. With the official MongoDB Agent Skills and plugins, builders can move faster with confidence, and organizations can adopt coding agents knowing that MongoDB best practices are embedded directly into every workflow.</p>
<p>Next Steps</p>
<p>Ship faster, more reliable apps on MongoDB with Agent Skills. Install them for Claude Code, Cursor, Gemini CLI, and VS Code!</p>
]]></description>
      <pubDate>Tue, 31 Mar 2026 13:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/product-release-announcements/introducing-mongodb-agent-skills</link>
      <guid>https://www.mongodb.com/company/blog/product-release-announcements/introducing-mongodb-agent-skills</guid>
    </item><item>
      <title>Enhance Your In-IDE Data Browsing Experience With MongoDB</title>
      <description><![CDATA[<p>MongoDB is excited to announce the general availability of our enhanced data browsing experience in the MongoDB for Visual Studio (VS) Code extension. This new experience offers a unified workspace for developers to visually browse, query, and edit their data natively, streamlining workflows so they can manage their database right where they write their code.</p>
<p>Evolving the developer workflow</p>
<p>The modern developer’s workflow is incredibly fast-paced. With developers juggling an average of 14 different tools daily, the cognitive load of constantly jumping between applications can easily disrupt focus. When your application needs to evolve, working with your data shouldn’t force a break in your flow state.</p>
<p>As the MongoDB for VS Code extension has grown to nearly 3 million downloads, we’ve seen firsthand how developers are pushing the boundaries of what an in-IDE (integrated development environment) database tool can do. While developers love accessing their data directly in the editor, we wanted to transform this experience to be even more visual, actionable, and seamless. Instead of switching to external terminals for quick tasks or taking the time to translate familiar MongoDB Shell commands into Extended JSON (EJSON), we are bringing a full-fledged, intuitive data management suite right to your VS Code sidebar.</p>
<p>Exploring what’s new in the MongoDB for VS Code extension</p>
<p>Here are the key improvements that transform the extension into a complete workflow solution:</p>
<ol>
<li>Paginated tree view and prescriptive titles
Understanding complex data models at a glance is crucial for rapid development. We are transforming the document browsing experience by automatically detecting human-readable fields (like names or emails) to create prescriptive document titles, rather than just displaying standard _id hashes. Furthermore, you can now use a structured, paginated tree view to instantly browse collection data from the “Documents” tab, as well as interactively explore playground results when you run a script. This means you get the full context of your collections visually and instantly.</li>
</ol>
<p>Figure 1. Paginated tree view and prescriptive titles</p>
<ol start="2">
<li>Powerful action menus and header controls
Navigating your data should be inherently actionable. To give you full management capabilities without the need for you to write manual queries, we’ve added a new action header directly inside the tree view. This header equips you with buttons to instantly insert documents, refresh (to rerun the current query or playground script), sort ascending/descending by _id, paginate through results, and even bulk delete to empty a collection.</li>
</ol>
<p>Additionally, managing individual records is easier than ever. Simply hover over any document within the tree view to reveal a contextual action menu that allows you to instantly delete, copy, clone, and edit the document natively.</p>
<p>Figure 2. Native action menus</p>
<ol start="3">
<li>Native editing and shell syntax default
We wanted to make interacting with your database as natural as possible.</li>
</ol>
<p>To remove the friction of translating your commands, we’ve added a setting that defaults to standard Shell syntax over EJSON for all insert, clone, edit, and clipboard functionalities. This guarantees that any document you copy or any quick fix you make in the extension is instantly compatible with your application code.</p>
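<p>To give a sense of the difference, here is the same document in Extended JSON (relaxed) versus shell syntax. The field values are hypothetical, for illustration only:</p>
<pre><code>// Extended JSON (relaxed): type wrappers for non-JSON types
{ "_id": { "$oid": "65f1a2b3c4d5e6f7a8b9c0d1" },
  "createdAt": { "$date": "2026-01-15T00:00:00Z" } }

// Shell syntax: what you'd write in mongosh or your application code
{ _id: ObjectId("65f1a2b3c4d5e6f7a8b9c0d1"),
  createdAt: ISODate("2026-01-15T00:00:00Z") }
</code></pre>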
<p>Figure 3. Clone action.</p>
<p>Stop context switching and start building</p>
<p>Your database tools should adapt to your workflow, not disrupt it. By bringing native data editing, intelligent tree views, and standard Shell syntax directly into your sidebar, we’re bridging the gap between writing code and managing data. You no longer have to sacrifice your flow state just to make a quick database fix, verify a playground result, or translate verbose EJSON formats. This overhaul is another step in our commitment to making this MongoDB extension your ultimate command center—empowering you to spend less time wrestling with external tools and more time actually building your application.</p>
]]></description>
      <pubDate>Tue, 17 Mar 2026 17:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/technical/enhance-your-in-ide-data-browsing-experience-with-mongodb</link>
      <guid>https://www.mongodb.com/company/blog/technical/enhance-your-in-ide-data-browsing-experience-with-mongodb</guid>
    </item><item>
      <title>Observability and OpenTelemetry: Introducing MongoDB Atlas Log Integration</title>
      <description><![CDATA[<p>In high-stakes enterprise environments, outages do not wait for business hours, and neither do IT/Network Operators.</p>
<p>A latency spike hits the dashboard, and metrics signal that the database is under pressure. The cause? Indeterminate. Meanwhile, the business impact is immediate: orders fail to process, customers can’t access accounts, transactions stall, and critical records become temporarily unavailable. Every minute of uncertainty translates into lost revenue, frustrated users, and escalating pressure.</p>
<p>Teams often fall back on a familiar—yet time-consuming—ritual: logging into their data platform, exporting large log files, extracting compressed archives, and manually searching through thousands of lines of entries to identify the issue. What should be a quick diagnosis becomes a manual context-switching investigation. By the time the problematic query, configuration issue, or audit event is identified, users have already experienced the disruption—and the business has absorbed the cost.</p>
<p>MongoDB believes the database should be the heartbeat of a digital business. So we’re introducing a new log integration that brings MongoDB Atlas system and audit logs directly into external observability and storage platforms. This enhancement helps bridge the gap between metrics and meaning when it matters most.</p>
<p>Flexible log delivery for modern observability workflows</p>
<p>Now database operators, DevOps pros, and IT operations teams alike can send MongoDB logs—including mongod, mongos, and audit logs—directly to the tools they already rely on: Datadog, Splunk, Google Cloud Storage, Azure Blob Storage, or Amazon S3.</p>
<p>Beyond native integrations, MongoDB supports sending logs via OpenTelemetry (OTel), the open-source standard for collecting and transmitting telemetry data. This enables customers to export MongoDB logs to any observability or logging backend that supports OTel. By using a vendor-neutral, standards-based protocol, MongoDB fits seamlessly into modern observability architectures. This eliminates lock-in and preserves flexibility as tooling strategies evolve.</p>
<p>Enabling real-time clarity</p>
<p>Modern enterprises generate rich system logs essential for debugging and compliance. However, when these logs are siloed, operational inefficiencies grow. Manual log access introduces friction, delays resolution, and creates a visibility gap between metrics and logs.</p>
<p>MongoDB’s new log integration transforms that experience with:</p>
<p>Accelerated troubleshooting: Send logs in near real-time to observability platforms like Datadog, Splunk, or OpenTelemetry-compatible backends, enabling teams to quickly identify issues and reduce manual operational steps that slow incident resolution.</p>
<p>Unified telemetry: Correlate MongoDB logs with application traces and infrastructure metrics in existing observability platforms, helping teams quickly understand how database behavior impacts overall system performance.</p>
<p>Simplified compliance: Automatically route audit logs to secure long-term storage such as Amazon S3, helping organizations meet regulatory and audit requirements without manual log management.</p>
<p>Figure 1. Atlas Log Integration configuration options for delivering MongoDB logs to observability and storage platforms.</p>
<p>Real-world use cases</p>
<p>How does this look in practice for modern application, operations, and engineering teams? Here are a few examples.</p>
<p>The criticality of observability</p>
<p>As applications scale, the database becomes the most critical layer of an organization’s technology stack. Missing or siloed visibility leads to costly downtime and fragmented decision-making.</p>
<p>This log integration is available for dedicated M10+ clusters. An external sink can be configured in minutes:</p>
<ol>
<li>Navigate to the Project Integrations page in the MongoDB Atlas UI.</li>
<li>Select the intended destination: Datadog, Splunk, Google Cloud, Microsoft Azure, Amazon S3, or any OTel log endpoint.</li>
<li>Enter the required credentials and select the desired logs to send: mongod, mongos, or audit.</li>
</ol>
<p>Note: Atlas Search logs are also currently available via private preview.</p>
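<p>To sketch the receiving end of an OTel-based pipeline, a minimal OpenTelemetry Collector configuration that accepts logs over OTLP and prints them for inspection might look like the following. The endpoint and the local debug exporter are illustrative assumptions, not Atlas-documented settings; in practice you would swap in your backend's exporter:</p>
<pre><code>receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # accept logs over OTLP/HTTP

processors:
  batch: {}                       # batch log records before export

exporters:
  debug:                          # swap for your backend's exporter
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
</code></pre>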
<p>Figure 2. MongoDB Atlas logs integrated into an OpenTelemetry observability pipeline.</p>
<p>One observability strategy, built to scale</p>
<p>For teams that need fast, MongoDB-centric visibility, MongoDB Atlas continues to offer powerful native tools like Query Insights and the Query Profiler. These capabilities are designed to surface what is happening inside a user’s clusters with minimal friction.</p>
<p>However, as organizations scale, database insights cannot live in isolation. MongoDB Atlas’s log integration extends observability systematically to the data plane. This enables MongoDB logs to flow into the observability platforms teams already use across engineering, security, IT operations, and compliance. With native integrations and an OpenTelemetry-compatible endpoint, teams can route logs wherever they are needed. This enables rapid troubleshooting, stronger auditability, and confident scaling without blind spots.</p>
]]></description>
      <pubDate>Thu, 12 Mar 2026 14:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/product-release-announcements/introducing-mongodb-atlas-log-integration</link>
      <guid>https://www.mongodb.com/company/blog/product-release-announcements/introducing-mongodb-atlas-log-integration</guid>
    </item><item>
      <title>Inside MongoDB Dublin: The Heart of Our International Growth</title>
      <description><![CDATA[<p>Nestled between the Irish Sea and the Wicklow Mountains, MongoDB’s Dublin office brings together people from around the world. It’s a place where you can build a meaningful career, contribute to leading global products, and feel part of a close-knit community. Located in Ballsbridge just south of Dublin city center, the office is a short walk from the Lansdowne DART station and is well-served by multiple bus routes, making it easy to plug into everything the city has to offer.</p>
<p>Image of a wall in the MongoDB Dublin office that is painted with Dublin-relevant illustrations and text that says &quot;Build together&quot; and &quot;Make it matter&quot;</p>
<p>As MongoDB’s international headquarters, Dublin is a key hub where over 300 employees from more than 40 nationalities own critical parts of the company’s products and support customers running mission-critical systems across the globe. Established in 2012, MongoDB Dublin has long played a pivotal role in helping the company achieve its mission of empowering innovators to create, transform, and disrupt industries by unleashing the power of software and data.</p>
<p>In this spotlight, you’ll hear from people across MongoDB’s Product &amp; Technology, Sales, and Technical Services teams about what it’s like to build your career—and your life—in Dublin with MongoDB.</p>
<p>Image of CEO, CJ Desai, speaking in front of a group of employees in the Dublin office.
CEO CJ Desai holds an “Ask Me Anything” session during a recent visit to Dublin.</p>
<p>Life at MongoDB Dublin</p>
<p>The MongoDB Dublin office represents both the company’s global energy and the warmth of Irish culture. Ciara, workplace manager for the office, notes:</p>
<p>There’s always something happening here, from fun in-office activities and lively trivia nights to vibrant town halls that bring people together. We run regular volunteer days that give everyone the chance to give back, along with coffee mornings, team socials, and plenty more that keep the atmosphere welcoming and connected.</p>
<p>There’s a real sense that MongoDB Dublin captures true Irish charm and character: friendly, open, and full of personality. That mix creates a workplace that feels both globally diverse and locally grounded. You get the best of both worlds: international perspectives combined with that uniquely Irish sense of humour and hospitality.</p>
<p>MongoDB Dublin isn’t just a workplace—it’s a team that wins and grows together!</p>
<p>Here’s what stands out about life and work here:</p>
<p>Wellness comes first: From private health, dental, and vision care, to Gympass discounts and Headspace subscriptions, we support your physical and mental well-being.</p>
<p>Family and flexibility: Eligible employees enjoy 20 weeks of paid parental leave, fertility and family planning support through Carrot, and Cleo resources for parenting. Our flexible work model means you never have to choose between career and family.</p>
<p>Financial security: We offer life insurance, income protection, and the chance to grow wealth through our employee stock purchase plan.</p>
<p>Time for you: Every employee enjoys 27 days of annual leave, plus extra wellness initiatives like MongoDB Bloom (our internal wellness program), midday meditation sessions, and fun in-office perks.</p>
<p>Growth and development: You’ll have access to dedicated learning programs, technical training and leadership development to upskill in programming, leadership, communication, and more.</p>
<p>Inclusion and community: With over 40 nationalities represented in our Dublin office, our teams reflect the global customers and communities we serve. From active Employee Resource Groups (ERGs) like MDBWomen, Queer Collective, and Config to a warm, collaborative office culture, MongoDB Dublin is a place where you’ll feel supported—whether you’re relocating to Ireland or continuing to grow your career here.</p>
<p>2 separate images of colleagues from MongoDB smiling together during volunteer opportunities.
Local volunteer opportunities, team outings, and other events bring colleagues together.</p>
<p>Hear from the Dublin team</p>
<p>Donal, product &amp; technology leader for EMEA, describes how MongoDB’s platform and culture drew him in:</p>
<p>Joining MongoDB felt like the right move at the right time. MongoDB is an amazing platform, with capabilities ranging from deploy anywhere, support for unstructured and semi-structured data, native vector search for embeddings, and flexible workload handling. The problems we’re tackling around data, scale, and flexibility are real, hard problems and they matter to how modern applications are built.</p>
<p>What’s really impressed me since joining is the engineering culture. People are curious, thoughtful, and open to learning, bringing strong ownership and intellectual honesty to their work. Building out our engineering teams in Ireland is exciting because we’re creating groups that will own critical parts of the product, not just contributing at the edges.</p>
<p>It’s a great moment to join MongoDB in Ireland. The company is growing, the Dublin office is becoming a core engineering hub, and engineers here have the chance to make a real impact. If you’re looking for challenging work, smart teammates, and the opportunity to help shape what we’re building, this is a really exciting place to be.</p>
<p>2 separate images of multiple employees of MongoDB attending events in the Dublin office.
You’ll find the office frequently buzzing with events.</p>
<p>For Max, an engineer working on query optimization, joining MongoDB Dublin has been both a professional challenge and a personal milestone:</p>
<p>MongoDB was the first database I ever used when learning how to build backend applications, so getting the opportunity to intern here and now work here full-time feels like a real full-circle moment. Today, I work on query optimization, a core part of the database engine focused on making queries as fast as possible. It brings a lot of interesting theoretical computer science problems. We get to dive into research papers and actually implement those ideas in practice, which means I’m learning something new every day.</p>
<p>The Dublin team is incredibly international, with talented engineers from all over the world. I genuinely feel lucky to work alongside people with such deep experience building databases. Especially at the start of my career, being surrounded by colleagues I look up to makes a huge difference. We’re often tackling hard problems that haven’t been solved before in the context of document databases, so having that level of expertise around you is invaluable.</p>
<p>It’s been almost a year since I moved to Dublin, and it’s flown by. Relocating to a new country is always a bit daunting, but Ireland is such a welcoming place that it already feels like a second home. Dublin doesn’t always get the credit it deserves, but being so close to both the sea and mountains is something I really value. What more could you ask for?</p>
<p>An image of 3 employees working and taking meeting calls from within small private phone booths.
Need a space for focus time? Grab a phone booth!</p>
<p>On the sales side, Dublin is at the crossroads of cloud, AI, and data-driven innovation, making it an exciting place to build a long-term career in tech sales.</p>
<p>Account executive Stephane shares how MongoDB’s culture and the Dublin market shape the experience:</p>
<p>Selling at MongoDB stands out because of the sales culture and investment in people. You’re constantly challenged to adapt, stay flexible (much like the document model), and raise the bar, both individually and as a team. There’s a strong entrepreneurial mindset here. If you’re curious, proactive, and eager to learn, you’re given real ownership and the support to run with your ideas, rather than being tied to a one-size-fits-all script. It’s why Think Big, Go Far genuinely shows up in how we work.</p>
<p>Dublin sits at the centre of cloud, AI, and data-driven innovation, making it a great place to build a long-term career. Here, you’re not just selling software - you’re helping customers modernise mission-critical systems and move from experimentation to real production impact. I joined MongoDB because I believed in the impact of cloud and digital transformation, and that opportunity has only grown. Now, with AI driving the next wave of innovation, MongoDB is well-positioned to play a meaningful role, and it’s exciting to be part of that journey.</p>
<p>What really sums up sales at MongoDB is the opportunity to grow. There’s a clear path forward, whether that’s moving from sales development representative to account executive, stepping into leadership, or exploring new roles across the business. If you’re motivated, open to learning, and serious about building a long-term sales career, MongoDB gives you the environment and support to do it.</p>
<p>The image on the left shows employees taking a break by playing foosball and billiards together. The image on the right shows employees gathered at a table with open laptops and a larger screen with a presentation.</p>
<p>Silvia, an account development representative, recalls how the culture came through even before she joined—and how it’s shown up since:</p>
<p>From the very first interviews, I was struck by how clearly MongoDB’s values came through. Every conversation felt supportive and intentional. What stood out most was the genuine investment in me as a candidate, making sure I had what I needed to succeed before I’d even contributed anything back.</p>
<p>Once you join, that support only grows. It’s a high-performance environment and the work is challenging, but you’re never doing it alone. Learning is constant, collaboration is open, and from week one, you feel like an active part of the team, without rigid hierarchies holding you back.</p>
<p>Walking into the Dublin office for the first time, I immediately felt the sense of community. In my first month, I met both our CIO and CEO in person. Hearing them speak about the company’s ambition made it clear this is something bigger, and that the work we do here truly matters.</p>
<p>An image of two employees smiling with #LifeAtMongoDB photo props.
The annual summer party is always good craic.</p>
<p>The technical services team in Dublin plays a critical role in helping customers succeed on MongoDB. For Fernanda, a manager on the team, that journey has been about impact, community, and the confidence to grow her career in a new country:</p>
<p>I joined MongoDB in 2018, when the Cloud support team had just four people across all of EMEA. From the start, there was a strong culture of ‘we’re here to help each other, you can count on me.’ It made us much more than colleagues, and I had never experienced such a supportive environment before.</p>
<p>What’s kept me here is that this culture never faded as we scaled. Even on cold, rainy Dublin mornings, there’s real warmth in the office, smart conversations, and a real sense that you’re not facing challenges alone. That combination of impact, continuous learning, and a true ‘team first’ mindset is what’s made me stay and grow my career here.</p>
<p>Today, Dublin is larger, more diverse, and deeply connected to global initiatives, but the essence remains the same. You feel welcomed, people are approachable, and there’s a real Build Together mindset across teams.</p>
<p>Ready to build your career in Dublin?</p>
<p>Whether you’re building core database features or partnering with customers on their most strategic initiatives, at MongoDB Dublin, you’ll be surrounded by people who want to see you succeed and by teams that are shaping the future of software.</p>
]]></description>
      <pubDate>Fri, 27 Feb 2026 16:30:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/culture/inside-mongodb-dublin-the-heart-of-our-international-growth</link>
      <guid>https://www.mongodb.com/company/blog/culture/inside-mongodb-dublin-the-heart-of-our-international-growth</guid>
    </item><item>
      <title>Towards Model-based Verification of a Key-Value Storage Engine</title>
      <description><![CDATA[<p>In our previous post, we talked about our process of specifying MongoDB’s distributed transactions protocol and how it enabled novel analysis of its performance characteristics. In this follow-up, we talk about how the modularity of our specification also enabled us to check that the underlying storage engine implementation actually conforms to the abstract behavior defined in our formal specification. That is, we are able to formalize the interface boundary between the sharded transaction protocol and WiredTiger, the underlying key-value storage engine, and develop an automated way to generate tests for checking conformance between the semantics of the underlying storage engine layer and this abstract model.</p>
<p>As mentioned in the previous post, the concepts in this post are explored in greater depth in our recently published VLDB ’25 paper, Design and Modular Verification of Distributed Transactions in MongoDB.</p>
<p>Modular, Model-Based Verification</p>
<p>As discussed in Part 1, we had developed a TLA+ specification of MongoDB’s distributed transactions protocol in a compositional manner, describing the high-level protocol behavior while also formalizing the boundary between the distributed aspect of the transactions protocol and the underlying single-node WiredTiger storage engine component. As mentioned, the distributed transactions protocol can be viewed as running atop the lower-level storage layer.</p>
<p>When considering the correctness guarantees of the distributed transactions protocol, a subtle aspect is its interaction with the concurrency control mechanisms at each layer of the system, and in particular with the WiredTiger storage engine API, which implements various timestamp-based operations to support the correct operation of the distributed transactions protocol.</p>
<p>Our formal model of the distributed transactions protocol was useful for checking its guarantees, but we also wanted to formalize the contract between this distributed protocol and the underlying storage engine, to check that this interface boundary, as defined in our abstract specification, was matched by the implementation. Leveraging the clean interface boundary in our specification, we developed a tool for automatically checking conformance between the WiredTiger implementation and this abstract storage specification (Storage) we defined. Our storage layer specification itself serves as a useful and precise definition of a subset of the WiredTiger storage engine semantics, but we can also use it for automatically generating test cases to check conformance of WiredTiger semantics to our spec.</p>
<p>Figure 1. Compositional specification of distributed transactions in TLA+ and storage engine component.</p>
<p>Path-based test case generation</p>
<p>To do this, we make use of a modified version of the TLC model checker to first generate the storage component specification’s complete graph of reachable states for finite parameters. We then compute a set of path coverings in this graph, where each path is then converted to an individual test case as a sequence of underlying storage engine API calls. This model-based verification technique allows us to automatically generate tens of thousands of individual test cases for WiredTiger which each check that the implementation matches the behavior defined in our abstract specification, which is also the contract relied on by the high-level distributed transactions protocol.</p>
<p>Figure 2. Model-based test case generation workflow.</p>
<p>For a small, complete finite model (2 keys and 2 transactions), we are able to check conformance with WiredTiger using a generated suite of 87,143 tests, which are generated and executed against the storage engine in around 40 minutes.</p>
<p>Figure 3. Test case generation statistics for the storage layer model.</p>
<p>Our current storage layer specification is available on GitHub, along with a link to an interactive, explorable version. In the future, we hope to model a more extensive subset of the WiredTiger API and to explore alternate state space exploration strategies for generating tests, e.g., randomized path sampling or other strategies that require only approximate coverage of the state space. This approach bears similarity to other testing efforts we have explored in the past.</p>
<p>Conclusion
Modeling our distributed transactions protocol enabled verification of protocol correctness, and our use of modularity and compositional verification was instrumental in letting us reason about these high-level correctness properties while also automatically checking that our abstract storage interface correctly matches the semantics of the implementation. Similar approaches have been explored recently for model-based verification of distributed systems, as well as other approaches for testing systems using path-based test case generation techniques. In the future, we also remain optimistic about the role that LLMs can play in this type of verification workflow. For example, once such a conformance-testing harness is in place, an LLM could autonomously develop a model of the underlying system's behavior, providing a precise characterization of that behavior along with automated test suite generation.</p>
<p>You can find more details about this work in our recently published VLDB ‘25 paper, and in the associated GitHub repo that contains our specifications and code, as well as links to in-browser, explorable versions of our specifications.</p>
]]></description>
      <pubDate>Fri, 27 Feb 2026 15:30:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/engineering/towards-model-based-verification-key-value-storage-engine</link>
      <guid>https://www.mongodb.com/company/blog/engineering/towards-model-based-verification-key-value-storage-engine</guid>
    </item><item>
      <title>Innovating with MongoDB | Customer Successes, February 2026</title>
      <description><![CDATA[<p>Who says that winter is when things slow down? MongoDB has had a busy start to the year, with a steady stream of announcements and product features—all against the backdrop of an industry moving at warp speed. It's been a lot, and it's been a blast!</p>
<p>For example, the energy at January’s MongoDB.local San Francisco—where we announced capabilities to help teams ship production AI faster—was infectious. MongoDB isn’t just starting a new chapter in AI; we’re rewriting the book in real time.</p>
<p>The next generation of AI companies isn't just looking for a temporary place to store data; they’re looking to build on a generational modern data platform. Indeed, the most innovative founders are moving away from rigid, legacy systems and embracing a single, fluid foundation that can grow with them.</p>
<p>At MongoDB.local SF, our message was clear: Choose your data platform strategically in order to ship faster. From our new Voyage 4 models to the general availability of our Intelligent Assistant, we are obsessed with anticipating what developers need next. This assistant is particularly impactful because it embeds MongoDB-specific expertise directly into Compass and MongoDB Atlas, allowing developers to troubleshoot performance without the &quot;context-switching&quot; that traditionally slows them down.</p>
<p>In this issue, I’m thrilled to spotlight four startups that are building the future on the right foundation. You’ll see how Modelence and Thesys are using our flexible document model to eliminate 'operational drag,' allowing them to iterate on AI-native workflows in real time. Then there are Heidi and Emergent Labs, both proving that when you simplify your codebase with a unified platform, you can turn a plan into shipped code at record speed.</p>
<p>I’ve highlighted their journeys below so you can see exactly how these leaders are setting a new pace and changing their trajectory with MongoDB.</p>
<p>Modelence
Modelence aims to modernize backend infrastructure for the era of AI-assisted development. Traditional relational databases and manual systems create significant operational drag, as their rigid schemas and heavy migrations cannot keep pace with agent-native workflows. These legacy systems struggle with the high-velocity requirements of intelligent coding agents, which must iterate on data structures in real time without causing system downtime.</p>
<p>To build a stable foundation for automation, Modelence integrated MongoDB Atlas as its core data layer. The platform utilizes the flexible document model to align with how intelligent systems think, allowing specifications and runtime events to coexist. This &quot;fit&quot; enables per-tenant isolation and managed credentials, ensuring automated changes remain safe and traceable without the tangle of relational joins.</p>
<p>Standardizing on MongoDB Atlas helped Modelence raise $3 million in its seed round. The company now moves from planning to running features in minutes, achieving faster iteration loops and fewer regressions.</p>
<p>Thesys
Thesys aims to empower developers by making generative user interfaces—adaptive, real-time components—accessible to everyone. Previously, developers faced the friction of static chat bubbles and hardcoded dashboards that failed to visually represent complex AI outputs. These traditional interfaces forced teams to rebuild UI layers for every use case, which kills user engagement.</p>
<p>To solve these orchestration challenges, Thesys integrated MongoDB Atlas as the operational backbone for its C1 API middleware. The platform utilizes the document model to manage complex entities within a single, high-performance data layer. By removing the friction of mapping unstructured LLM outputs to rigid schemas, engineering teams can now ship updates weekly.</p>
<p>Through the MongoDB for Startups program, Thesys successfully accelerated its go-to-market timeline. By offloading operational management to MongoDB Atlas, Thesys now maintains the agility to evolve its data layer alongside emerging AI trends, ensuring its intelligent interfaces remain high-performing as they scale globally.</p>
<p>Emergent Labs
Emergent Labs sought to democratize software development through “vibe coding,” a platform where AI agents build applications from natural language prompts. The company’s initial use of PostgreSQL caused significant friction, as AI agents frequently failed during schema migrations when non-technical users iteratively changed their application requirements.</p>
<p>By switching to MongoDB Atlas, Emergent Labs provided its agents with a flexible, document-based architecture that matches the JSON data they naturally produce. This eliminated the PostgreSQL migration loops, allowing agents to modify data structures on the fly and deploy isolated, production-ready databases in minutes.</p>
<p>The transition has powered the creation of nearly 2 million applications across 180 countries in just four months. With MongoDB Atlas, the platform now supports complex builds of up to 300,000 lines of code, doubling deployment rates and allowing non-technical entrepreneurs to launch sophisticated tools without traditional engineering resources.</p>
<p>Heidi
Heidi aims to reclaim clinician time by automating administrative tasks. Previously, clinicians spent 40% of their shifts on paperwork, reducing time for patient care. To manage this at scale, Heidi initially used Amazon DocumentDB, but faced critical limitations including mandatory downtime for scaling, high latency, and a lack of native search functionalities essential for complex AI workloads.</p>
<p>To eliminate these bottlenecks, Heidi migrated to MongoDB Atlas for its flexible schema and built-in AI capabilities. Integrating MongoDB Vector Search enables Heidi to perform RAG without &quot;bolt-on&quot; databases, streamlining vector and semantic search under a single API. This technical fit enables developers to unify diverse medical data while meeting stringent healthcare security and regulatory requirements.</p>
<p>Since migrating, Heidi has supported 81 million consultations, returning 18 million hours to the frontline. By offloading management to MongoDB Atlas, Heidi ensures its platform remains high-performing while empowering practitioners to focus on their primary mission: providing compassionate patient care.</p>
<p>Video Spotlight
Before you go, watch TinyFish Co-founder and CEO, Sudheesh Nair, explain how “nano agents” are transforming web-based research.</p>
<p>Learn how TinyFish extracts actionable intelligence from unstructured internet data using MongoDB and Voyage AI.</p>
]]></description>
      <pubDate>Wed, 25 Feb 2026 16:45:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/innovation/innovating-with-mongodb-customer-successes-february-2026</link>
      <guid>https://www.mongodb.com/company/blog/innovation/innovating-with-mongodb-customer-successes-february-2026</guid>
    </item><item>
      <title>Building a Movie Recommendation Engine with Hugging Face and Voyage AI</title>
      <description><![CDATA[<p>This guest blog post is from Arek Borucki, Machine Learning Platform &amp; Data Engineer for Hugging Face - a collaboration platform for the machine learning community. The Hugging Face Hub works as a central place where anyone can share, explore, discover, and experiment with open-source ML. HF empowers the next generation of machine learning engineers, scientists, and end users to learn, collaborate and share their work to build an open and ethical AI future together. With the fast-growing community, some of the most used open-source ML libraries and tools, and a talented science team exploring the edge of tech, Hugging Face is at the heart of the AI revolution.</p>
<p>Traditional movie search relies on filtering by genre, actor, or title. But what if you could search by how you feel? Imagine typing:</p>
<p>&quot;something uplifting after a rough day at work&quot;</p>
<p>&quot;a movie that will make me cry&quot;</p>
<p>&quot;I need adrenaline, can't sleep anyway&quot;</p>
<p>&quot;something to watch with grandma who hates violence&quot;</p>
<p>This is mood-based semantic search: matching your emotional state to movie plot descriptions using AI embeddings.</p>
<p>In this tutorial, you will build a mood-based movie recommendation engine using three powerful technologies: voyage-4-nano (a state-of-the-art open-source embedding model), Hugging Face (for model and dataset hosting), and MongoDB Atlas Vector Search (for storing and querying embeddings at scale).</p>
<p>Why mood-based search?
Genre tags are coarse. A &quot;drama&quot; can be heartwarming or devastating. A &quot;comedy&quot; can be light escapism or dark satire. Traditional filters cannot capture these nuances.</p>
<p>Semantic search solves this by understanding meaning. When you search for &quot;feel-good movie for a rainy Sunday&quot;, the system doesn't look for those exact words. It understands the intent and matches it against plot descriptions that evoke similar feelings.</p>
<p>Architecture overview
The system combines three components from the Hugging Face ecosystem with MongoDB:</p>
<p>voyage-4-nano (Hugging Face Hub): Converts text to embeddings (up to 2048 dimensions; we use 1024)</p>
<p>MongoDB/embedded_movies (Hugging Face Datasets): 1,500+ movies with plot summaries, genres, and cast</p>
<p>MongoDB Atlas Vector Search: Stores embeddings and performs similarity search</p>
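<p>Once embeddings are stored, the query side of this architecture is a MongoDB aggregation pipeline with a $vectorSearch stage. Here is a rough sketch of building that pipeline; the index name ("vector_index") and embedding field ("plot_embedding") are placeholder assumptions and should match your own Atlas Vector Search index definition:</p>

```python
def mood_search_pipeline(query_vector, limit=5):
    """Build an Atlas Vector Search aggregation pipeline for a mood query.

    The index name and path below are assumptions for illustration;
    align them with your cluster's vector search index configuration.
    """
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",        # assumed index name
                "path": "plot_embedding",       # assumed embedding field
                "queryVector": query_vector,
                "numCandidates": limit * 20,    # oversample for better recall
                "limit": limit,
            }
        },
        {
            "$project": {
                "_id": 0,
                "title": 1,
                "plot": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]

pipeline = mood_search_pipeline([0.1] * 1024, limit=3)
# With a live cluster: results = db.movies.aggregate(pipeline)
```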
<p>Understanding voyage-4-nano
voyage-4-nano is the smallest model in Voyage AI's latest embedding series, released with open-weights under the Apache 2.0 license. Voyage AI was acquired by MongoDB, and the Voyage 4 series models are now available through MongoDB Atlas. All models in the series (voyage-4-large, voyage-4, voyage-4-lite, and voyage-4-nano) produce compatible embeddings in a shared embedding space, allowing you to mix and match models within a single use case.</p>
<p>Although voyage-4-nano natively supports embeddings up to 2048 dimensions, we deliberately truncate them to 1024 dimensions using its Matryoshka embedding property. In practice, this provides a strong balance between semantic quality, storage efficiency, and vector search latency, while preserving stable ranking behavior.</p>
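<p>Mechanically, Matryoshka truncation is simple: keep the leading dimensions and re-normalize to unit length. The sketch below uses a random vector as a stand-in for a real voyage-4-nano embedding just to show the mechanics; the semantic-quality claim only holds for genuinely Matryoshka-trained embeddings:</p>

```python
import numpy as np

def truncate_embedding(vec, dim=1024):
    """Keep the first `dim` dimensions and re-normalize to unit length.
    Matryoshka-trained models front-load information, so the prefix
    remains a usable embedding on its own."""
    head = np.asarray(vec, dtype=np.float32)[:dim]
    return head / np.linalg.norm(head)

rng = np.random.default_rng(0)
full = rng.normal(size=2048).astype(np.float32)  # stand-in for a 2048-dim embedding
small = truncate_embedding(full, dim=1024)

print(small.shape)                                 # (1024,)
print(bool(np.isclose(np.linalg.norm(small), 1.0)))  # True
```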
<p>Sentence Transformers
This tutorial uses Sentence Transformers, a Python library built on top of Hugging Face Transformers. It is specifically designed for working with embedding models and provides a simple API for generating text embeddings.</p>
<p>Why Sentence Transformers instead of raw Transformers? When working with embedding models, you need to handle tokenization, pooling, normalization, and prompt formatting. Sentence Transformers does all of this automatically in a single method call. The code is cleaner, there are fewer potential errors, and you get built-in features like batch processing with progress bars.</p>
<p>Under the hood, Sentence Transformers still uses Hugging Face Transformers to load and run the model.</p>
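<p>To make "pooling and normalization" concrete, here is a sketch of the kind of work Sentence Transformers automates, with random arrays standing in for the per-token hidden states a real model would produce via tokenization and a forward pass:</p>

```python
import numpy as np

def mean_pool(token_states, attention_mask):
    """Average the token vectors, ignoring padded positions."""
    mask = attention_mask[:, None].astype(np.float32)
    return (token_states * mask).sum(axis=0) / mask.sum()

rng = np.random.default_rng(42)
token_states = rng.normal(size=(6, 8)).astype(np.float32)  # (seq_len, hidden_dim)
attention_mask = np.array([1, 1, 1, 1, 0, 0])              # last two tokens are padding

sentence_vec = mean_pool(token_states, attention_mask)
sentence_vec /= np.linalg.norm(sentence_vec)  # unit-normalize for cosine similarity

print(sentence_vec.shape)  # (8,)
```

With Sentence Transformers, all of this collapses into a single `model.encode(...)` call.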
<p>Configure the development environment
Let's get started!</p>
<p>Create the Project Structure</p>
<pre><code>mkdir mood-movie-search
cd mood-movie-search
mkdir src
touch requirements.txt .env</code></pre>
<p>Install dependencies
Create the requirements.txt file:</p>
<pre><code>cat &lt;&lt;EOF &gt; requirements.txt
fastapi&gt;=0.109.0
uvicorn&gt;=0.27.0
pymongo&gt;=4.6.1
sentence-transformers&gt;=3.0.0
python-dotenv&gt;=1.0.0
datasets&gt;=2.16.0
torch
EOF</code></pre>
<p>Create a Python virtual environment and install dependencies:</p>
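<p>On a Unix-like system, a standard sketch for this step looks like the following (adjust the activation command for your shell or platform):</p>

```shell
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```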
<p>to be continued...</p>
]]></description>
      <pubDate>Tue, 17 Feb 2026 15:30:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/technical/building-a-movie-recommendation-engine-with-hugging-face-and-voyage-ai</link>
      <guid>https://www.mongodb.com/company/blog/technical/building-a-movie-recommendation-engine-with-hugging-face-and-voyage-ai</guid>
    </item><item>
      <title>Edge AI Made Easy: MongoDB and ObjectBox Data Synchronization</title>
      <description><![CDATA[<p>AI is currently undergoing a shift, from massive centralized models to distributed, real-world deployments. While the cloud remains the foundation for large-scale AI training and analytics, AI’s next evolution lies at the edge—where data is created, where decisions require instant action, and where connectivity cannot be guaranteed.</p>
<p>At MongoDB, we are committed to helping organizations build intelligent applications that span cloud and edge environments seamlessly. That’s why we are excited to highlight our work with ObjectBox, a lightweight, high-performance on-device database and sync solution purpose-built for edge AI and offline-first applications.</p>
<p>Together, MongoDB and ObjectBox are making it easier for developers to build hybrid architectures that deliver fast, private, and resilient AI experiences across devices and environments.</p>
<p>Figure 1. Example cloud-edge AI setup.</p>
<p>ObjectBox: A purpose-built database for the edge
Founded by Markus Junginger and Dr. Vivien Dollinger, ObjectBox was designed specifically to support edge computing and offline-first use cases. At its core, ObjectBox’s design prioritizes efficiency (including speed, privacy, battery use, and memory consumption) and ease of development.</p>
<p>This strong foundation makes ObjectBox particularly well-suited for next-generation applications that need to run reliably in edge environments—whether on a factory floor, in a retail store, or through a remote healthcare device. ObjectBox empowers developers to build responsive, privacy-conscious applications that work even when connectivity is limited or unavailable.</p>
<p>The platform includes the following features:</p>
<p>A fast, local vector database that stores data directly on devices, supporting on-device AI and local vector search.</p>
<p>Built-in data sync, which keeps data consistent across devices even when offline, and now integrates directly with MongoDB.</p>
<p>Multi-language support, including support for C++, Swift, Flutter, Python, Go, Java, and Kotlin, makes ObjectBox accessible to developers across ecosystems.</p>
<p>These features make ObjectBox an ideal solution for building intelligent applications that run reliably at the edge. This includes a wide range of devices—from smartphones and industrial sensors to automotive ECUs and point-of-sale (POS) devices.</p>
<p>Edge to cloud data sync: The MongoDB Atlas native connector
ObjectBox's new MongoDB Sync Connector combines local-first edge processing with centralized cloud intelligence (i.e., hybrid AI).</p>
<p>This is increasingly important as organizations seek to process data closer to where it is generated—at the edge—while still benefiting from the power and scalability of the cloud. Managing this dual environment efficiently is key to unlocking performance, resilience, and real-time insights.</p>
<p>Developers can now use ObjectBox for real-time, low-latency operations on edge devices while syncing relevant data to MongoDB Atlas, enabling organizations to achieve:</p>
<p>Long-term storage</p>
<p>Centralized dashboards and analytics</p>
<p>AI model retraining</p>
<p>Cloud-based coordination and automation</p>
<p>This hybrid architecture aligns with how modern applications are being built—distributing intelligence where it makes the most sense.</p>
<p>Figure 2. Central Sync for ObjectBox and MongoDB Atlas.</p>
<p>Figure 3. Edge setup for ObjectBox and MongoDB Atlas.</p>
<p>Bringing AI to the edge isn’t just about performance. It is also about privacy, sustainability, and user experience. By processing data locally:</p>
<p>Privacy is enhanced—sensitive information stays on the device.</p>
<p>Latency is reduced—actions can be taken instantly.</p>
<p>Bandwidth usage drops—lowering costs and improving efficiency.</p>
<p>Battery and CPU use are optimized—extending the life of edge devices.</p>
<p>This aligns with MongoDB’s commitment to empowering developers to build intelligent, resilient, and user-centric applications—wherever they’re needed.</p>
<p>Real-world use cases
Industrial IoT
Industrial IoT (IIoT) is a prime example of where edge and cloud must work together. On a modern factory floor, everything from low-frequency brownfield devices to high-frequency greenfield machines generates vast amounts of data.</p>
<p>The data generated includes vibration levels, temperature readings, pressure changes, and machine runtimes. In short, it is the sort of data that often needs to be processed locally to monitor systems in real time and to trigger alerts when anomalies or threshold breaches occur.</p>
<p>With ObjectBox running on device, this critical operational data can be captured, analyzed, and used onsite and within AI applications immediately, even with limited or no connectivity. ObjectBox is designed for efficient, high-throughput I/O, enabling real-time processing of high-frequency data streams even on resource-constrained edge devices.</p>
<p>It supports a broad range of data types—from objects and time series data, to tree structures (e.g., UMATI) and vector embeddings—with a lightweight database (typically only a few MB in size). This makes it well-suited for production deployments that need to integrate modern AI and edge workloads with legacy systems and heterogeneous hardware, typical for the manufacturing industry.</p>
<p>The ObjectBox Sync Server can run on almost any device, enabling fast, reliable, and secure offline data synchronization across the shop floor. Paired with the MongoDB Sync Connector, the most relevant insights can then be synced to the cloud, where they can be aggregated, enriched with AI models, and stored for long-term analysis (like anomaly detection or RUL models).</p>
<p>This hybrid architecture enables advanced use cases such as predictive maintenance, where historical records, live equipment data, and machine learning models are combined to forecast potential failures before they happen. (For more details, explore our Predictive Maintenance solutions library.)</p>
<p>With this architecture, the system provides:</p>
<p>Real-time responsiveness on the shop floor</p>
<p>Centralized analytics and cross-site dashboards at cloud scale</p>
<p>Support for predictive maintenance workflows in offline or intermittently connected environments</p>
<p>Unified data access across heterogeneous data sources, from individual sensors to full production lines</p>
<p>By combining low-latency edge processing with centralized intelligence, developers and operators gain visibility into how equipment is performing, from the health of a single machine to trends across an entire fleet or factory network, without compromising performance or reliability.</p>
<p>Figure 4. Industrial IoT.</p>
<p>Point-of-sale systems
Point-of-sale (POS) systems—such as those used in restaurants—are another strong fit for edge AI and hybrid architectures. During peak dining hours, cashiers and servers need instant, reliable access to menus, order histories, and payment processing—even if their internet connection is unstable or drops.</p>
<p>With ObjectBox’s offline-first, on-device database, restaurants can process real-time transactions, track inventory, and personalize customer experiences with local AI directly at the POS terminal. This helps store owners avoid service disruptions or lost sales.</p>
<p>With MongoDB Sync Connector, relevant data (like sales trends, customer preferences, and stock levels) syncs to MongoDB Atlas. This enables restaurant managers to run centralized dashboards, perform demand forecasting, and train AI models that optimize staffing, menu design, and supply chain planning.</p>
<p>In summary, this hybrid POS architecture of local-first responsiveness and cloud-powered insights ensures:</p>
<p>Seamless customer experiences without downtime</p>
<p>Accurate, up-to-date data whenever needed</p>
<p>Scalable, resilient operations across multiple restaurant locations</p>
<p>Figure 5. Point-of-sale systems.</p>
<p>What’s next
With the release of ObjectBox 5.0 and its new MongoDB Connector, ObjectBox has taken a major step toward simplifying user‑specific data sync at the edge. Together, MongoDB and ObjectBox offer a modern foundation for building intelligent, distributed applications that run reliably from device to cloud. This partnership makes it easier than ever to pair low‑latency edge data processing with the flexibility, security, and global reach of MongoDB Atlas.</p>
]]></description>
      <pubDate>Tue, 03 Feb 2026 15:30:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/innovation/edge-ai-made-easy-mongodb-and-objectbox-data-synchronization</link>
      <guid>https://www.mongodb.com/company/blog/innovation/edge-ai-made-easy-mongodb-and-objectbox-data-synchronization</guid>
    </item><item>
      <title>MongoDB.local San Francisco 2026: Ship Production AI, Faster</title>
      <description><![CDATA[<p>Today at MongoDB.local San Francisco, we announced capabilities that collapse the distance between AI prototype and production.</p>
<p>Building AI applications means solving real problems: keeping conversational context clean and queryable, retrieving the right information from thousands of past interactions, and connecting AI agents to your data without custom plumbing. These aren't theoretical challenges; they're the friction points that slow teams down every day.</p>
<p>The AI era demands more from your data platform. MongoDB gives you everything you need to build quickly.</p>
<p>Voyage AI: the best gets better
Embedding models can make or break AI search experiences. We're proud that voyage-3-large has been the world's top-performing embedding model on Hugging Face's RTEB benchmark since its inception.</p>
<p>But we didn’t rest on our laurels. There’s a new model at the top of the charts.</p>
<p>Today, we're pleased to announce that the Voyage 4 model family is now generally available. The best just got better. The voyage-4 series models operate in a shared embedding space, allowing for cross-model compatibility and unprecedented flexibility to optimize for accuracy, speed, or cost. This release also includes voyage-4-nano, our first open-weight model available on Hugging Face, perfect for local development.</p>
<p>Additionally, we're launching the new voyage-multimodal-3.5 model, which has been specifically trained to support video content alongside text and images. For developers building multimodal AI applications, this represents a significant leap forward in handling diverse content types within a single retrieval system. Best of all, upgrading is remarkably straightforward—you can simply change the model parameter to &quot;voyage-multimodal-3.5&quot; in your API call, instantly unlocking video capabilities without needing to refactor your existing codebase or change your application architecture.</p>
<p>Finally, we’re announcing the public preview of the Embedding and Reranking API on MongoDB Atlas, providing API support for Voyage AI models. While enabling standalone usage of the models with any technology stack, the API benefits from the robust security and scalability standards of MongoDB. By bringing critical components into a single control plane and interface, it eliminates the need to manage separate vendors and significantly reduces operational overhead.</p>
<p>Automated Embedding, convenience built into MongoDB Community
Persistence matters. An AI with amnesia isn’t helpful; users need systems to remember context from minutes, hours, and weeks ago. Every interaction is a goldmine of preferences, patterns, and behavior that should make the next interaction smarter.</p>
<p>But storing conversation history in a database isn't enough. Simple storage solves nothing if you can't retrieve the right information at the right time. The real challenge is intelligent retrieval: finding relevant context across thousands of past interactions, filtered by metadata and user attributes, without your system buckling under production load. This is where vector search becomes critical—enabling semantic search that captures meaning, not just keywords, while operating on your real-time operational data. And this is where MongoDB's approach eliminates a major pain point: the need to sync data between separate systems for vectors and application data.</p>
<p>Until now, generating and storing these vectors required overhead—development time, infrastructure management, and cognitive load. No longer.</p>
<p>We're introducing Automated Embedding for MongoDB Community Edition in public preview. MongoDB Community Edition now handles the complexity of managing embedding models automatically, giving developers high-accuracy semantic search in the database while maintaining the flexibility to use any LLM provider or orchestration framework. Automated Embedding offers one-click automatic embedding directly inside MongoDB, eliminating the need to sync data and manage external models. It’s an easy way to get high-quality embeddings natively.</p>
<p>Best-in-class retrieval shouldn't require infrastructure work—Automated Embedding in MongoDB Vector Search delivers on that promise. Automated Embedding in MongoDB Vector Search is available now in Community Edition, with Atlas access coming soon.</p>
<p>Precise text filtering for advanced search use cases
Today, we announced the launch of Lexical Prefilters for Vector Search. This addresses a long-standing request from developers building semantic search interfaces who need advanced text filtering alongside vector operations.</p>
<p>The new syntax enables powerful text filtering capabilities—fuzzy matching, phrase search, wildcards, and geospatial filtering—as prefilters for vector search. This leverages full text analysis capabilities while maintaining the semantic power of vector search. We've introduced a new vector data type in $search index definitions and a vectorSearch operator within the $search aggregation stage to make this work seamlessly.</p>
<p>This replaces the knnBeta operator with a cleaner, more powerful approach. For teams already using lexical and vector search together, this provides a simplified migration path with significantly expanded capabilities.</p>
<p>Intelligent assistance wherever you work
MongoDB’s intelligent assistant is generally available in MongoDB Compass. The assistant provides in-app guidance for debugging connection errors, optimizing query performance, and learning best practices, all without leaving your development environment. You can even query your database using natural language through read-only database tools that require your approval before execution, allowing for deeper contextual awareness of your data.</p>
<p>The assistant was built to address real friction: developers switching between multiple tools and documentation tabs, waiting for support responses, or getting generic advice from general-purpose AI chatbots that don't understand MongoDB-specific contexts. Now, tailored guidance is available instantly, right where you're working.</p>
<p>The modernized Atlas Data Explorer interface brings the Compass experience directly into the Atlas web UI, addressing a critical gap for teams with security policies that restrict desktop application usage. Users can now perform sophisticated query development, optimization, bulk operations, and complex aggregations—all with AI assistance—across all MongoDB Atlas clusters in a unified web interface.</p>
<p>Whether you're troubleshooting a connection issue, optimizing a slow query, or learning how to structure an aggregation pipeline, the intelligent assistant delivers MongoDB-specific expertise without context switching. Try the intelligent assistant in the modernized Atlas Data Explorer now.</p>
<p>The engine behind MongoDB Search and Vector Search is now available under SSPL
Finally, mongot, the engine powering MongoDB Search and Vector Search, is now publicly available under SSPL. While still in preview, after years of development and investment, we're making the source code of this core technology available to the community, expanding our unified search architecture beyond Atlas to every MongoDB deployment.</p>
<p>mongot runs separately from mongod, MongoDB's core database process, and is the foundation that makes powerful search native to MongoDB. Releasing mongot under SSPL means full transparency for security audits and debugging complex edge cases. Developers can dive into mongot's architecture, understand how search and vector operations work under the hood, and help shape the future of search at MongoDB.</p>
<p>A modern data platform that evolves with your needs
These announcements reflect our commitment to anticipating what developers need as AI development matures. Vector search, time series, stream processing, queryable encryption, Atlas itself—we've consistently delivered on emerging requirements. &quot;If you're building an early-stage company that is going to scale very rapidly, you need a database solution that isn't going to break under the load of a huge volume of users,&quot; said Eno Reyes, Co-founder and CTO of Factory. &quot;You need a fast-moving team with a reliable solution, and there really is one option in this space—and it's MongoDB.&quot;</p>
<p>Rabi Shanker Guha, CEO of Thesys, put it this way: “MongoDB helps us move fast in an ever-changing world. The best database is the one you don’t have to think about—it just works exactly where and how you need it. That’s MongoDB for us.”</p>
<p>Ship faster, scale confidently
Each capability we announced today addresses real friction in the AI development workflow and in the developer experience. We're not asking developers to choose between structured data and vectors, between performance and flexibility, or between rapid iteration and production readiness.</p>
<p>The promise is straightforward: ship faster, scale confidently, and focus on what makes your AI application unique—not on managing database infrastructure. In an ecosystem crowded with point solutions and retrofitted legacy systems, MongoDB is a modern data platform built for the long haul.</p>
]]></description>
      <pubDate>Thu, 15 Jan 2026 20:15:39 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/events/mongodb-local-san-francisco-2026-ship-production-ai-faster</link>
      <guid>https://www.mongodb.com/company/blog/events/mongodb-local-san-francisco-2026-ship-production-ai-faster</guid>
    </item><item>
      <title>Vision RAG: Enabling Search on Any Documents</title>
      <description><![CDATA[<p>Information comes in many shapes and forms. While retrieval-augmented generation (RAG) primarily focuses on plain text, it overlooks vast amounts of data along the way. Most enterprise knowledge resides in complex documents, slides, graphics, and other multimodal sources. Yet, extracting useful information from these formats using optical character recognition (OCR) or other parsing techniques is often low-fidelity, brittle, and expensive.</p>
<p>Vision RAG makes complex documents—including their figures and tables—searchable by using multimodal embeddings, eliminating the need for complex and costly text extraction. This guide explores how Voyage AI’s latest model powers this capability and provides a step-by-step implementation walkthrough.</p>
<p>Vision RAG: Building upon text RAG
Vision RAG is an evolution of traditional RAG built on the same two components: retrieval and generation.</p>
<p>In traditional RAG, unstructured text data is indexed for semantic search. At query time, the system retrieves relevant documents or chunks and appends them to the user’s prompt so the large language model (LLM) can produce more grounded, context-aware answers.</p>
<p>Figure 1. Text RAG with Voyage AI and MongoDB.</p>
<p>Enterprise data, however, is rarely just clean plain text. Critical information often lives in PDFs, slides, diagrams, dashboards, and other visual formats. Today, this is typically handled by parsing tools and OCR services. Those approaches create several problems:</p>
<p>Significant engineering effort to handle many file types, layouts, and edge cases</p>
<p>Accuracy issues across different OCR or parsing setups</p>
<p>High costs when scaled across large document collections</p>
<p>Next-generation multimodal embedding models provide a simpler and more cost-effective alternative. They can ingest not only text but also images or screenshots of complex document layouts, and generate vector representations that capture the meaning and structure of that content.</p>
<p>Vision RAG uses these multimodal embeddings to index entire documents, slides, and images directly, even when they contain interleaved text and images. This enables them to be searchable via vector search without requiring heavy parsing or OCR. At query time, the system retrieves the most relevant visual assets and feeds them, along with the text prompt, into a vision-capable LLM to inform its answer.</p>
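<p>At query time, the retrieval step reduces to a nearest-neighbor search in the shared embedding space. The following is a minimal, self-contained sketch of that step; the tiny hand-written vectors and the <code>page_index</code> dictionary are illustrative stand-ins for real voyage-multimodal-3 embeddings and a MongoDB Vector Search index:</p>

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy stand-ins for multimodal embeddings of document pages/screenshots.
page_index = {
    "pricing_chart.png": [0.9, 0.1, 0.0],
    "org_chart.png":     [0.1, 0.9, 0.0],
    "roadmap_slide.png": [0.2, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    """Return the ids of the k pages most similar to the query embedding."""
    ranked = sorted(page_index,
                    key=lambda p: cosine(query_embedding, page_index[p]),
                    reverse=True)
    return ranked[:k]

# A query whose (toy) embedding is closest to the pricing chart.
print(retrieve([1.0, 0.0, 0.1]))
```

<p>In a real pipeline, the retrieved images—not extracted text—are then attached to the prompt of a vision-capable LLM.</p>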
<p>Figure 2. Vision RAG with Voyage AI and MongoDB.</p>
<p>As a result, vision RAG gives LLM-based systems native access to rich, multimodal enterprise data, while reducing engineering complexity and avoiding the performance and cost pitfalls of traditional text-focused preprocessing pipelines.</p>
<p>Voyage AI’s latest multimodal embedding model
The multimodal embedding model is where the magic happens. Historically, building such a system was challenging due to the modality gap. Early multimodal embedding models, such as contrastive language-image pretraining (CLIP)-based models, processed text and images using separate encoders. Because the outputs were generated independently, results were often biased toward one modality, making retrieval across mixed content unreliable. These models also struggled to handle interleaved text and images, a critical limitation for vision RAG in real-world environments.</p>
<p>Voyage-multimodal-3 adopts an architecture similar to modern vision-capable LLMs. It uses a single encoder for both text and visual inputs, closing the modality gap and producing unified representations. This ensures that textual and visual features are treated consistently and accurately within the same vector space.</p>
<p>Figure 3. CLIP-based architecture vs. voyage-multimodal-3’s architecture.</p>
<p>This architectural shift enables true multimodal retrieval, making vision RAG a viable and efficient solution. For more details, refer to the voyage-multimodal-3 blog announcement.</p>
<p>Implementation of vision RAG
Let’s take a simple example and showcase how to implement vision RAG. Traditional text-based RAG often struggles with complex documents, such as slide decks, financial reports, or technical papers, where critical information is often locked inside charts, diagrams, and figures.</p>
<p>By using Voyage AI’s multimodal embedding models alongside Anthropic’s vision-capable LLMs, we can bridge this gap. We will treat images (or screenshots of document pages) as first-class citizens, retrieving them directly based on their visual and semantic content and passing them to a vision-capable LLM for reasoning.</p>
<p>To demonstrate this, we will build a pipeline that extracts insights from the charts and figures of the GitHub Octoverse 2025 survey, which simulates the type of information typically found in enterprise data.</p>
<p>The Jupyter Notebook for this tutorial is available on GitHub in our GenAI Showcase repository. To follow along, run the notebook in Google Colab (or similar), and refer to this tutorial for explanations of key code blocks.</p>
<p>Step 1: Install necessary libraries
First, we need to set up our Python environment. We will install the voyageai client for generating embeddings and the anthropic client for our generative model.</p>
<p>....</p>
]]></description>
      <pubDate>Mon, 12 Jan 2026 16:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/technical/vision-rag-enabling-search-on-any-documents</link>
      <guid>https://www.mongodb.com/company/blog/technical/vision-rag-enabling-search-on-any-documents</guid>
    </item><item>
      <title>That’s a Wrap: MongoDB’s 2025 in Review &amp; 2026 Predictions</title>
      <description><![CDATA[<p>It’s nearly the end of the year—again! That means it’s time for an end-of-year blog post that expresses disbelief at the passage of time. Which, as the saying goes, flies when you’re having fun. And definitely when you’re as busy as MongoDB was in 2025.</p>
<p>It was a big year for the company—and more importantly, for the tens of thousands of customers and millions of developers who rely on MongoDB’s modern data platform for their most mission-critical workloads. At MongoDB, everything we do starts with our obsession with customers and their needs, and if there’s a theme to MongoDB’s 2025, it was (and will continue to be) enabling customer innovation and helping them succeed in the AI era.</p>
<p>So here are a few highlights of how MongoDB acted on behalf of customers in 2025. From the acquisition of Voyage AI to customer success across industries, a lot happened in 2025. Let’s go!*</p>
<p>*Read to the end for 2026 thoughts.</p>
<p>2025: The (MongoDB) year that was
Voyage AI, modernization, and search
In February, MongoDB announced the acquisition of Voyage AI, a pioneer in embedding and reranking models, to enhance the accuracy of AI applications. Integrating Voyage AI's advanced retrieval technology with MongoDB’s modern, AI-ready data platform addresses a critical challenge: LLM model hallucinations caused by a lack of context. By improving retrieval accuracy for specialized domains like finance and law, the integration enables businesses to deploy AI for mission-critical use cases.</p>
<p>To learn more, see the MongoDB Voyage AI page.</p>
<p>Then, in September, we launched MongoDB AMP, an AI-powered Application Modernization Platform. AMP is designed to accelerate the transformation of legacy applications through a combination of AI-powered tooling, a proven delivery framework, and expert guidance (tools, techniques, and talent) to help enterprises reduce technical debt and modernize 2-3 times faster.</p>
<p>Want more? Sure you do! Check out this short video.</p>
<p>MongoDB also announced the addition of search and vector search capabilities to MongoDB Community Edition and MongoDB Enterprise Server. This allows developers to build and test AI-native applications, including those using retrieval-augmented generation (RAG), in local or on-premises environments. Previously exclusive to MongoDB Atlas, these features enable secure, hybrid deployments where sensitive data can remain on-premises while still leveraging advanced search tools.</p>
<p>Here’s a (slightly less short) video about search and vector search on Enterprise Server.</p>
<p>Growing and scaling with MongoDB
As noted, everything we do at MongoDB starts with our obsession with customers. 2025 was another banner year for customer success and innovation—we were inspired by what organizations of every shape and size, across industries and geographies, built with MongoDB in 2025. Here are just two of the many stories our customers shared in 2025; much more can be found in my colleague Katie Palmer’s blog series, Innovating with MongoDB.</p>
<p>Factory
By combining the Atlas modern data platform with Voyage AI’s high-performance embeddings, the AI-native startup Factory—which uses AI agents called Droids to accelerate software development lifecycles for organizations—consolidated its fragmented tech stack. This enabled superior code retrieval, simplified operations, and provided the scalability needed to process billions of tokens daily.</p>
<p>McKesson
McKesson, a global pharmaceutical distributor, replaced its monolithic legacy infrastructure with MongoDB Atlas to meet strict drug tracing mandates. By adopting our modern cloud data platform, McKesson scaled its operations 300x, managing tracking data for 1.2 billion containers annually without latency, and ensuring compliance and patient safety while reducing developer complexity.</p>
<p>For more, check out the video of McKesson at MongoDB.local NYC from September.</p>
<p>From niche NoSQL to enterprise powerhouse
As senior MongoDB engineer and Technical Fellow Ashish Kumar put it earlier this year, “through a sustained and deliberate engineering effort,” MongoDB has gone from a (seemingly) niche NoSQL solution to a trusted enterprise standard, and now delivers “the high availability, tunable consistency, ACID transactions, and robust security that enterprises demand.”</p>
<p>A new era of leadership
The face of MongoDB has also changed—our CFO, Mike Berry, joined the company in April, and Dev Ittycheria stepped down as CEO in November, after more than 11 years leading the company (including its 2017 IPO). In a LinkedIn post about his role, new MongoDB CEO CJ Desai noted that the company is “at the forefront of a new data revolution, unlocking the next wave of productivity and intelligence.”</p>
<p>“Having spent my career building and scaling technology platforms, I’ve always been drawn to companies defined by clarity of vision, relentless organic innovation, and a customer-first culture. MongoDB exemplifies all three,” said Desai.</p>
<p>We couldn’t agree more. Onward!</p>
<p>Reading the 2026 tea leaves
So what might 2026 bring (for MongoDB and tech at large)? Here are a handful of our leaders’ predictions:</p>
<p>“As much as people want to talk about Artificial General Intelligence (AGI), we’re still in the phase where most AI use cases automate redundant tasks but benefit from human-in-the-loop checks. Organizations that use AI to complete work that historically is a drain on human resources—but then use people to carefully verify what AI builds, apply governance frameworks, and maintain accountability across the data lifecycle—will be more successful.”
—Pete Johnson, Field CTO, AI, MongoDB</p>
<p>“After years of inflated expectations and unsustainable spending, the AI industry is trapped in a bubble where companies reflexively throw LLMs at every problem, driving up costs with minimal to no return. Businesses that break free from this spending cycle are the ones that understand the need to ground LLM responses in factual data and learn from prior mistakes. We believe the best way to do this will be with highly accurate embedding models and rerankers for reliable data retrieval.”
—Frank Liu, Staff Product Manager, MongoDB</p>
<p>&quot;In 2026, cloud independence will evolve from strategic preference to existential imperative across enterprises of every scale. The outages and disruptions of recent years have exposed a fundamental truth: in an always-on digital economy—where commerce, mobility, governance, and even public safety depend on uninterrupted access to cloud services—single-provider reliance is no longer a calculated risk, but a systemic vulnerability.</p>
<p>Compounding this is the inexorable rise of data sovereignty. Regulatory regimes worldwide now demand precise jurisdictional control over data residency, rendering rigid cloud commitments incompatible with compliance at global scale.</p>
<p>The defining competitive advantage will belong to organizations that transcend fragile prevention theater and engineer true infrastructural resilience: architectures inherently portable, data frictionlessly mobile, and operations autonomously sustained across heterogeneous clouds through AI-orchestrated redundancy.</p>
<p>In short, the winners will not merely mitigate downtime—they will design systems that render the concept obsolete.&quot;
—Ben Cefalo, SVP, Head of Core Products, MongoDB</p>
<p>Happy holidays and happy New Year, everyone!</p>
]]></description>
      <pubDate>Mon, 22 Dec 2025 17:34:58 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/mongodb-2025-in-review-2026-predictions</link>
      <guid>https://www.mongodb.com/company/blog/mongodb-2025-in-review-2026-predictions</guid>
    </item><item>
      <title>Token-count-based Batching: Faster, Cheaper Embedding Inference for Queries</title>
      <description><![CDATA[<p>Embedding model inference often struggles with efficiency when serving large volumes of short requests—a common pattern in search, retrieval, and recommendation systems. At Voyage AI by MongoDB, we call these short requests queries, and other requests are called documents. Queries typically must be served with very low latency (typically 100–300 ms).</p>
<p>Queries are typically short, and their token-length distribution is highly skewed. As a result, query inference tends to be memory-bound rather than compute-bound. Query traffic is also bursty, so autoscaling reacts too slowly. In sum, serving many short requests sequentially is highly inefficient.</p>
<p>In this blog post, we explore how batching can be used to serve queries more efficiently. We first discuss padding removal in modern inference engines, a key technique that enables effective batching. We then present practical strategies for forming batches and selecting an appropriate batch size. Finally, we walk through the implementation details and share the resulting performance improvements: a 50% reduction in GPU inference latency—despite using 3X fewer GPUs.</p>
<p>Padding removal makes effective batching possible
Given the patterns of query traffic, one straightforward idea is: can we batch them to improve inference efficiency? Padding removal, supported in inference engines like vLLM and SGLang, makes efficient batching possible.</p>
<p>Most inference engines accept requests in the form (B, S), where B is the number of sequences in the batch and S is the maximum sequence length. Sequences are padded to the maximum sequence length so that tensors line up. But that convenience comes at a cost: padding tokens do no useful work yet still consume compute and memory bandwidth, so latency scales with B × S instead of the actual token count. When serving large volumes of short requests, this wastes a large share of compute and can inflate tail latency. Padding removal and variable-length processing fix this by concatenating all active sequences into one long &quot;super sequence&quot; of length T = Σtoken_count_i, where token_count_i is the token count of sequence i. Inference engines like vLLM and SGLang can process this combined sequence. Attention masks and position indices ensure that each sequence only attends to its own tokens. As a result, inference time now tracks T rather than B × S, aligning GPU work with what matters.</p>
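<p>The arithmetic is easy to see in a toy example (the whitespace tokenizer and sample queries below are illustrative only):</p>

```python
# Compare the padded work (B x S) with the actual token count T that
# padding removal lets the engine process.
batch = [
    "find cheap flights",
    "weather tomorrow",
    "summarize the attached quarterly report for me please",
]
token_counts = [len(q.split()) for q in batch]  # toy tokenizer: whitespace

B = len(batch)           # number of sequences in the batch
S = max(token_counts)    # everything is padded to the longest sequence
padded_tokens = B * S    # work done with padding
T = sum(token_counts)    # work done after padding removal

print(padded_tokens, T)  # latency tracks T, not B x S
```

<p>Here the padded batch does almost twice the useful work's worth of token processing, and the gap widens as the length distribution gets more skewed.</p>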
<p>Proposal: token-count-based batching
In Voyage AI, we proposed and built token-count-based batching, batching queries (short requests) by total token count in the batch (Σtoken_count_i), rather than by total request count or arbitrary time windows.</p>
<p>Time-window batching is inefficient when serving many short requests. A short window keeps latency low but produces small, under-filled batches; a long window improves utilization but adds queueing delay. Because traffic is bursty, any single window size oscillates between under- and over-filling, introducing variability in resource utilization and causing the system to shift between memory-bound and compute-bound operation. Request-count batching has similar problems.</p>
<p>Figure 1. Request-number-based batching vs. token-count-based batching.
Token-count batching aligns the batch size (total token count in the batch) with the actual compute required. When many queries arrive close together, we group them by token counts so the GPU processes a larger combined workload in a single forward pass. Based on our experiments, token-count-based batching amortizes fixed costs, reduces per-request latency and cost, and increases throughput and Model FLOPs Utilization (MFU).</p>
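<p>A minimal sketch of the grouping logic (the request ids, token counts, and budget below are made up; a real system would use estimated token counts from the tokenizer):</p>

```python
def batch_by_token_count(requests, budget):
    """Greedily group (request_id, token_count) pairs so that each batch's
    total token count stays at or under `budget` (the saturation point)."""
    batches, current, total = [], [], 0
    for rid, tokens in requests:
        if current and total + tokens > budget:
            batches.append(current)      # flush the filled batch
            current, total = [], 0
        current.append(rid)
        total += tokens
    if current:
        batches.append(current)          # flush the final partial batch
    return batches

pending = [("q1", 220), ("q2", 180), ("q3", 150), ("q4", 90), ("q5", 400)]
print(batch_by_token_count(pending, budget=600))
```

<p>Each batch is then submitted as one forward pass; note that an oversized single request still forms its own batch rather than being dropped.</p>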
<p>What is the optimal batch size?
Our inference-latency-vs-token-count profiling of query inference shows a clear pattern: latency is approximately flat up to a threshold (saturation point) and then becomes approximately linear. For small requests, fixed per-request overheads (like GPU scheduling, memory movement, pooling and normalization, etc.) dominate, and latency stays nearly constant; beyond that point, latency scales with token count. The threshold (saturation point) depends on factors like the model architecture, inference engines, and GPU. For our voyage-3 model running on A100, the threshold is about 600 tokens.</p>
<p>Figure 2. Inference latency vs. token count for voyage-3 on A100.
Based on the data of inference latency vs token count, we can analyze FLOPs Utilization (MFU) vs. token count and throughput vs. token count, which are shown in the following graph. We observe that Model FLOPs Utilization (MFU) and throughput scale approximately linearly with token count until reaching a saturation point. Most of our queries inferences are in the memory-bound zone, far away from the saturation point.</p>
<p>Figure 3. Approximate diagram of MFU/throughput/inference latency vs. token count.
Batching short requests can move the inference from memory-bound to compute-bound. If we choose the saturation point in Figure 3 as the batch size (total token count in the batch), the latency and throughput/MFU can be balanced and optimized.</p>
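<p>The flat-then-linear shape can be captured in a few lines. The overhead and saturation numbers below are hypothetical, chosen only to mirror the curves in Figures 2 and 3:</p>

```python
L0, T_SAT = 12.0, 600   # hypothetical: fixed overhead (ms), saturation point (tokens)

def latency_ms(tokens):
    """Approximately flat up to the saturation point, then linear in tokens."""
    return max(L0, L0 * tokens / T_SAT)

def throughput(tokens):
    return tokens / latency_ms(tokens)   # tokens per ms

# Throughput climbs until the saturation point and then plateaus, while
# latency starts growing past it -- so picking T_SAT as the batch's total
# token budget balances the two.
for t in (100, 600, 1200):
    print(t, latency_ms(t), round(throughput(t), 2))
```

<p>In this toy model, a 1,200-token batch doubles latency over a 600-token batch while gaining no throughput, which is why the saturation point is the natural batch-size target.</p>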
<p>Queue design: enabling token-count-based batching
Token-count–based batching needs a data system that does more than simple FIFO delivery. The system has to attach an estimated token_count to each request, peek across pending requests, and then atomically claim a subset whose total tokens fit the optimal batch size (Σtoken_count_i ≤ optimal_batch_size). Without these primitives, we either underfill the GPU—wasting fixed overheads—or overfill it and spike tail latency.</p>
<p>General-purpose brokers like RabbitMQ and Kafka are excellent at durability, fan-out, and delivery, but their batching knobs are message count/bytes, not tokens. RabbitMQ’s prefetch is request-count-based, and messages are pushed to consumers, so there’s no efficient way to peek and batch requests by Σtoken_count_i. Kafka batches by bytes/messages within a partition; token count varies with text and tokenizer, so there is no efficient way to batch requests by Σtoken_count_i.</p>
<p>So there are two practical paths to making token-count-based batching work. One is to place a lightweight aggregator in front of Kafka/RabbitMQ that assembles batches by token count and then dispatches them to model servers. The other is to use a store that naturally supports fast peek + conditional batching—for example, Redis with Lua scripting. In our implementation, we use Redis because it lets us atomically “pop up to the optimal batch size” and set per-item TTLs within a single Lua script call. Whichever we choose, the essential requirement is the same: the queue must let our system see multiple pending items, batch by Σtoken_count_i, and claim them atomically to keep utilization stable and latency predictable.</p>
<p>Our system enqueues each embedding query request into a Redis list as:</p>
<p>&lt;token_count&gt;::&lt;timestamp&gt;::&lt;query&gt;</p>
<p>Model servers call a Lua script that atomically fetches a batch of requests until the optimal batch size is reached. The probability of Redis losing data is very low; in the rare case that it does happen, users may receive 503 Service Unavailable errors and can simply retry. When QPS is low, batches are only partially filled and GPU utilization remains low, but latency still improves.</p>
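<p>The claim step can be simulated outside Redis. In production the loop below runs inside a single Lua script so the peek-and-pop is atomic; here a deque stands in for the Redis list, and the entries and budget are illustrative:</p>

```python
from collections import deque

# A deque standing in for the Redis list of "token_count::timestamp::query" entries.
queue = deque([
    "220::1700000001::find cheap flights",
    "180::1700000002::weather tomorrow",
    "150::1700000003::summarize this report",
    "400::1700000004::translate the contract",
])

def claim_batch(q, optimal_batch_size):
    """Pop entries while the running token total stays within the budget."""
    batch, total = [], 0
    while q:
        token_count = int(q[0].split("::", 1)[0])   # peek the head's token count
        if batch and total + token_count > optimal_batch_size:
            break                                   # next entry would overfill the batch
        entry = q.popleft()
        batch.append(entry.split("::", 2)[2])       # keep just the query text
        total += token_count
    return batch, total

print(claim_batch(queue, optimal_batch_size=600))
```

<p>Entries left in the queue (here, the 400-token request) are claimed by the next batch, which is what keeps utilization stable under bursty traffic.</p>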
<p>Figure 4. Batching implementation.
Results
We ran a production experiment on the Voyage-3-Large model serving, comparing our new pipeline (query batching + vLLM) against our old pipeline (no batching + Hugging Face Inference). We saw a 50% reduction in GPU inference latency—despite using 3X fewer GPUs.</p>
<p>We gradually onboarded 7+ models to the above query batching solution, and saw the following results (note that these results are based on our specific implementations of the “new” and “old” pipelines, and are not necessarily generalizable):</p>
<p>vLLM reduces GPU inference time by up to ~20 ms for most of our models.</p>
<p>GPU utilization and MFU increase, reflecting reduced padding, better amortization of per-batch overhead, and inference moving closer to the compute-bound regime.</p>
<p>Throughput improves by up to 8× via token-count–based batching.</p>
<p>Some model servers see P90 end-to-end latency drop by 60+ ms as queuing time is reduced under resource contention.</p>
<p>P90 end-to-end latency is more stable during traffic spikes, even with fewer GPUs.</p>
<p>In summary, combining padding removal with token-count-based batching improves throughput and reduces latency, while improving resource utilization and lowering operational costs for short-query embedding inference.</p>
]]></description>
      <pubDate>Thu, 18 Dec 2025 15:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/engineering/token-count-based-batching-faster-cheaper-embedding-inference-for-queries</link>
      <guid>https://www.mongodb.com/company/blog/engineering/token-count-based-batching-faster-cheaper-embedding-inference-for-queries</guid>
    </item><item>
      <title>MongoDB Announces Leadership Transition</title>
      <description><![CDATA[<p>Dev Ittycheria, President and Chief Executive Officer, shared the following message with MongoDB employees this morning.</p>
<p>This is the hardest email I have ever had to write to all of you. If you have not seen the announcement, I have decided to retire as CEO. Effective November 10, 2025, Chirantan “CJ” Desai will become the new CEO of MongoDB.</p>
<p>This was not an easy decision for me. The process to get to this point has been deeply emotional, as I care profoundly about MongoDB and the people who have made the company what it is today.</p>
<p>This news may come as a surprise, and for some, perhaps even a shock. That’s natural. Leadership transitions can evoke a range of reactions. I want to share why this is happening, and why it’s the right thing for MongoDB.</p>
<p>Every personnel change, including the most senior leadership changes, involves two key decisions: first, recognizing that it is the right time for change, and second, selecting the best person to replace the person leaving. This email is intended to explain both decisions.</p>
<p>Earlier this year, as part of our regular succession planning process, the Board and I discussed my long-term commitment. They asked if I would continue as CEO for another five years. After many conversations with my family and the Board, I realized I could not make that commitment. Some CEOs see their title as their identity. I do not. My core responsibility is to serve in the company's best interests. The company is primed for a new leader. One with a fresh perspective, grounded in experience and skills needed to guide MongoDB through its next evolution as a company, what we call MongoDB 3.0.</p>
<p>Consequently, I informed the Board that I would commit to two more years to help find a successor, and the search began. To our surprise and delight, what we thought would easily take 12 to 24 months happened much faster than anyone expected. After engaging with multiple qualified candidates, we found the right successor in CJ.</p>
<p>CJ is uniquely qualified for this role. CJ brings the rare growth-at-scale experience that will help continue to build MongoDB into an iconic technology company. At ServiceNow, he was the only executive to work directly with three of its highly regarded public company CEOs and played a pivotal role in organically scaling the company from just over $1 billion to more than $10 billion in revenue. Only a handful of independent software companies have ever reached that milestone. CJ helped transform ServiceNow from a product company to a platform company, scaled engineering, drove go-to-market excellence, and engaged deeply with investors. More recently, as President of Product and Engineering at Cloudflare, he helped fuel strong growth and stock performance.</p>
<p>CJ also possesses the personal qualities needed to succeed as CEO. He is humble, eager to learn, and wants to draw on the perspectives of the people at MongoDB and other stakeholders to inform his thinking. This blend of experience, judgment, and character gives me full confidence that he is well-equipped to lead MongoDB through its next phase of growth.</p>
<p>I often think of MongoDB’s journey as a long and extraordinary expedition. For the past eleven years, I have had the privilege of serving as its guide, helping chart the course, rally the team, and climb together through both calm and challenging terrain. Along the way, we have reached remarkable summits and proven what is possible through relentless innovation, persistence, and teamwork. Now it is time for a new guide to lead the next stage of the ascent and take MongoDB to even greater heights. CJ is the right leader to take MongoDB to the next summit.</p>
<p>MongoDB is on a strong footing, with a clear strategy, an exceptional leadership team, a product platform that is more relevant than ever, and a business that is executing well. The rise of AI and the explosion of data-intensive applications play directly to MongoDB’s strengths. Our technology sits at the center of how modern applications are built and how organizations will harness data to power intelligent, adaptive systems. I am confident MongoDB is perfectly positioned to capture this next wave of innovation.</p>
<p>As for me, I am not running away from MongoDB or leaving to join another company as CEO. I will remain on the Board and work closely with CJ to ensure a seamless transition. Over the years, this role has demanded an enormous amount of focus and energy; as a result, there are many things I’ve missed doing along the way. I’m looking forward to being more present for those moments — from simple time with my family to experiences and travel we’ve long put off. I plan to hold on to my MongoDB stock, as I firmly believe in the people and the opportunity, knowing that MongoDB’s best days are ahead of it.</p>
<p>Yes, change can be unsettling. I’m sure you will have many questions about this change, such as why now, why CJ is the best person to lead the company, and what this means for you. We will hold an all-hands meeting tomorrow at 10:30AM ET to discuss this transition, introduce CJ and take your questions.</p>
<p>That being said, I want to emphasize that the right change at the right time is how great companies get stronger. Just as a championship team refreshes its roster to stay competitive, MongoDB is bringing in new leadership, including other recent C-suite leaders who came before CJ, to drive our next phase of growth. This is not an ending; it’s the founding of a new moment.</p>
<p>I am incredibly proud of what we have built together and genuinely excited about what lies ahead with CJ leading us forward. I also want to thank each of you for making this journey so meaningful. Words cannot fully capture my gratitude for your passion, creativity, and belief in building something truly special.</p>
<p>I have often said that I want MongoDB to be an inflection point in people’s careers, a place where they can grow, take risks, and do the best work of their lives. I can say without hesitation that it has been exactly that for me. The skills I have developed, the experiences I have gained, and the relationships I have formed here have shaped me more than any other chapter in my professional life. I will carry them with me always, and will continue to cheer for and support MongoDB every step of the way.</p>
<p>--Dev</p>
]]></description>
      <pubDate>Mon, 03 Nov 2025 18:55:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/mongodb-announces-leadership-transition</link>
      <guid>https://www.mongodb.com/company/blog/news/mongodb-announces-leadership-transition</guid>
    </item><item>
      <title>Cars24 Improves Search For 300 Million Users With MongoDB Atlas</title>
      <description><![CDATA[<p>The Indian multinational online car marketplace Cars24 serves 300 million users globally.  The company offers services that span sales, insurance, maintenance, financing, and more, reshaping the entire car ownership journey.</p>
<p>Speaking at <a href="https://www.youtube.com/watch?v=zlR3wXzoa74&list=PL4RCxklHWZ9tfGnzDp49-tswA8hqsA70M&index=12" target="_blank">MongoDB .local Bengaluru in July 2025</a>, Pradeep Sharma, Head of Technology at Cars24, shared how MongoDB has been a key driver of Cars24’s digital transformation journey. Specifically, he highlighted two recent use cases that show how <a href="https://www.mongodb.com/products/platform">MongoDB Atlas</a> has helped Cars24 scale, improve its search capabilities, and reduce its architectural complexity.</p>
<iframe width="800" height="425" src="https://www.youtube.com/embed/zlR3wXzoa74?si=sgjrzqbrS3wXHkHI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
<h2>Matching the growing scale with simplified and expanded search</h2>
<p>Cars24 operates in multiple countries and serves a diverse customer base. Over the years, the company has used customer data, behavior analytics, and operational workflows to evolve from a platform for buying and selling cars into an end-to-end ecosystem supported by a hub of interconnected systems.</p>
<p>At the start of its journey, Cars24 relied on legacy relational databases, such as Postgres, for managing and searching data. This set-up would store information in the database, synchronize the data to a separate “bolt-on” search engine (such as Elasticsearch), manually index it, and then query the index.</p>
<p>While initially effective for a small application ecosystem, these processes became a bottleneck as the organization’s services grew. Multiple engineering teams piped data into a single search index, which often resulted in synchronization challenges and overwhelming administrative overhead.</p>
<p>Cars24 faced three core limitations with this setup:</p>
<ul>
<li>
<p><b>Lower developer productivity:</b> Ever-increasing effort was spent maintaining pipelines and synchronization procedures, leaving developers little bandwidth for building business features or innovating.</p>
</li>
<li>
<p><b>Architectural complexity:</b> Ensuring data sync consistency required multiple pipelines and logic to handle race conditions. This led to inefficiencies in real-time dashboard updates for agents.</p>
</li>
<li>
<p><b>Operational overhead:</b> Maintaining separate systems for database and search—alongside provisioning, patching, scaling, and monitoring—strained resources.</p>
</li>
</ul>
<p>Seeking an integrated approach, Cars24 embraced MongoDB Atlas, hosted on <a href="https://www.mongodb.com/resources/products/platform/mongodb-on-google-cloud">Google Cloud</a>. MongoDB Atlas would serve as a single, consistent, modern database and embedded search solution, powered by Apache Lucene.</p>
<p><a href="https://www.mongodb.com/products/platform/atlas-search">MongoDB Atlas Search</a> also enabled Cars24 to run search queries directly in the database. This eliminated the need to synchronize data between systems while delivering real-time results.</p>
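<p>To make this concrete, here is a minimal sketch of what such an in-database full-text query looks like: an Atlas Search <code>$search</code> stage inside an ordinary aggregation pipeline. The index name (<code>"default"</code>), collection, and field names below are hypothetical, not taken from the Cars24 talk.</p>

```javascript
// Illustrative sketch: an Atlas Search full-text query expressed as a
// regular aggregation pipeline, run against the same collection that
// stores the data -- no separate search cluster or sync pipeline involved.
const searchPipeline = [
  {
    $search: {
      index: "default", // hypothetical Atlas Search index name
      text: {
        query: "hatchback automatic",
        path: ["model", "description"], // hypothetical field names
        fuzzy: { maxEdits: 1 }, // tolerate a one-character typo per term
      },
    },
  },
  { $limit: 10 },
  { $project: { model: 1, price: 1, score: { $meta: "searchScore" } } },
];

// With the Node.js driver this would run as:
// const cars = await db.collection("cars").aggregate(searchPipeline).toArray();
```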
<p>This unified approach allowed the company’s developers to transition from managing complex synchronization mechanisms to building applications. Furthermore, the reduced administrative overhead enabled Cars24 to consolidate the team’s efforts, and to streamline query execution across the ecosystem.</p>
<p>Thanks to MongoDB Atlas and MongoDB Atlas Search, Cars24 was able to:</p>
<ul>
<li>
<p><b>Avoid the “synchronization tax”:</b> Switching to MongoDB Atlas eliminated the need for data synchronization and the additional tooling this mandated. Real-time searches can be performed from a single interface and workflow.</p>
</li>
<li>
<p><b>Deliver new search features faster:</b> By using a single, unified API across database and search operations, new features can be delivered rapidly.</p>
</li>
<li>
<p><b>Work with a fully managed platform:</b> With MongoDB Atlas, Cars24’s engineers can focus more on application development and building products, rather than thinking about managing indexes, syncing, and more.</p>
</li>
</ul>
<p>Following this successful migration, Cars24 decided to also use MongoDB Atlas to replace one of its legacy databases, ArangoDB. The switch to MongoDB Atlas eliminated major roadblocks for other critical search capabilities.</p>
<h2>From ArangoDB to MongoDB: Streamlined operations and 50% cost savings</h2>
<p>As Cars24 scaled new services globally, it encountered limitations with its ArangoDB-based geospatial search solution: performance bottlenecks, weak transactional guarantees that made it difficult to ensure consistent data operations, and a limited ecosystem that made scaling developer onboarding and troubleshooting increasingly onerous.</p>
<p>Moving to MongoDB Atlas enabled Cars24 to transition its geospatial services, consolidating its data storage and search capabilities under a single, versatile platform.</p>
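<p>As a hypothetical sketch of the kind of geospatial lookup such a service might consolidate onto MongoDB, the query below uses a 2dsphere index and the <code>$near</code> operator. The collection and field names are ours, not Cars24’s.</p>

```javascript
// Hypothetical proximity lookup backed by a 2dsphere index.
// Created once via: db.collection("serviceHubs").createIndex({ location: "2dsphere" })
const nearbyHubsQuery = {
  location: {
    $near: {
      // GeoJSON points are [longitude, latitude]; Bengaluru as an example.
      $geometry: { type: "Point", coordinates: [77.5946, 12.9716] },
      $maxDistance: 10000, // meters
    },
  },
};

// With the Node.js driver this would run as:
// const hubs = await db.collection("serviceHubs").find(nearbyHubsQuery).toArray();
```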
<p>“We now have a highly available architecture, and an amazing team at MongoDB that has our back,” said Sharma.</p>
<p>MongoDB offered a proven architecture for high availability, scalability, and real-world production readiness:</p>
<ul>
<li>
<p><b>Enhanced scalability:</b> MongoDB’s ability to scale massive workloads supports Cars24’s growing global presence.</p>
</li>
<li>
<p><b>Reliable transactions:</b> MongoDB provides robust multi-document ACID transactions across shards, meeting mission-critical needs.</p>
</li>
<li>
<p><b>Streamlined operations:</b> MongoDB offers a single platform that is not limited to a database only. By consolidating its geospatial search workload under MongoDB, Cars24 has reduced maintenance and operational overhead.</p>
</li>
</ul>
<p>Not only did Cars24 cut costs in half by moving to MongoDB, but the widespread market adoption of MongoDB Atlas also means that Cars24 can continue to rapidly onboard developers familiar with MongoDB, a recruiting priority for Cars24’s growing development team.</p>
<p>“To give you an idea, one of our business units had a developer team of less than 10 about a year ago. Now they are a triple-digit team,” said Sharma. “If we are going to keep introducing new developers, for a product coming up or scaling up, it becomes very important to focus on the community skills and support provided by our technology partner.”</p>
<p>“Now that we have moved from ArangoDB to MongoDB Atlas, our developers are the happiest,” he added.</p>
<p>Cars24 is now looking to consolidate even more of its application and data workflows under MongoDB Atlas. With the growing number of developers joining Cars24’s engineering teams, the company plans to use MongoDB Atlas further to enhance productivity, scalability, and data-driven insights.</p>
<div class="callout">
<p><b>Visit the <a href="https://www.mongodb.com/resources/product/platform/atlas-learning-hub">MongoDB Atlas Learning Hub</a> to learn more about Atlas.</b></p>
<p><b>To learn more about MongoDB Atlas Search, visit our <a href="https://www.mongodb.com/products/platform/atlas-search">product page</a>.</b></p>
</div>	]]></description>
      <pubDate>Sun, 12 Oct 2025 23:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/innovation/cars24-improves-search-for-300-million-users-with-atlas</link>
      <guid>https://www.mongodb.com/company/blog/innovation/cars24-improves-search-for-300-million-users-with-atlas</guid>
    </item><item>
      <title>The Cost of Not Knowing MongoDB, Part 3: appV6R0 to appV6R4</title>
      <description><![CDATA[<p>Welcome to the third and final part of the series &quot;The Cost of Not Knowing MongoDB.&quot; Building upon the foundational optimizations explored in <a href="https://www.mongodb.com/developer/products/mongodb/cost-of-not-knowing-mongodb/">Part 1</a> and <a href="https://www.mongodb.com/developer/products/mongodb/cost-of-not-knowing-mongodb-part-2/">Part 2</a>, this article delves into advanced MongoDB design patterns that can dramatically transform application performance.</p>
<p>In Part 1, we improved application performance by concatenating fields, changing data types, and shortening field names. In Part 2, we implemented the <b><a href="https://www.mongodb.com/docs/manual/data-modeling/design-patterns/group-data/bucket-pattern/">Bucket Pattern</a></b> and <b><a href="https://www.mongodb.com/company/blog/building-with-patterns-the-computed-pattern">Computed Pattern</a></b> and optimized the aggregation pipeline to achieve even better performance.</p>
<p>In this final article, we address the <b>issues and improvements</b> identified in <b>appV5R4</b>. Specifically, we focus on reducing the document size in our application to alleviate the disk throughput bottleneck on the MongoDB server. This reduction will be accomplished by adopting a <b>dynamic schema</b> and modifying the storage compression algorithm.</p>
<p>The application versions and revisions in this article are attributed to a senior MongoDB developer: they build on all the previous versions and use the Dynamic Schema pattern, which isn’t commonly seen.</p>
<h2>Application version 6 revision 0 (appV6R0): A dynamic monthly bucket document</h2>
<p>As mentioned in the Issues and Improvements of appV5R4 from the <a href="https://www.mongodb.com/developer/products/mongodb/cost-of-not-knowing-mongodb-part-2/#application-version-5-revision-4--appv5r4---doubling-down-on-the-computed-pattern">previous article</a>, the primary limitation of our MongoDB server is its disk throughput. To address this, we need to reduce the size of the documents being stored.</p>
<p>Consider the following document from appV5R3, which has provided the best performance so far:</p>
<pre><code tabindex="0">const document = {&NewLine;  _id: Buffer.from(&quot;...01202202&quot;),&NewLine;  items: [&NewLine;    { date: new Date(&quot;2022-06-05&quot;), a: 10, n: 3 },&NewLine;    { date: new Date(&quot;2022-06-16&quot;), p: 1, r: 1 },&NewLine;    { date: new Date(&quot;2022-06-27&quot;), a: 5, r: 1 },&NewLine;    { date: new Date(&quot;2022-06-29&quot;), p: 1 },&NewLine;  ],&NewLine;};&NewLine;</code></pre>
<p>The items array in this document contains only four elements, but on average, it will have around 10 elements, and in the worst-case scenario, it could have up to 90 elements. These elements are the primary contributors to the document size, so they should be the focus of our optimization efforts.</p>
<p>One commonality among the elements in the previous document is the date field, whose value always includes the year and month. By rethinking how this field and its value are stored, we can reduce storage requirements.</p>
<p>An unconventional solution we could use is:</p>
<ul>
<li>
<p>Changing the items field type from an array to a document.</p>
</li>
<li>
<p>Using the date value as the field name in the items document.</p>
</li>
<li>
<p>Storing the status totals as the value for each date field.</p>
</li>
</ul>
<p>Here is the previous document represented using the new schema idea:</p>
<pre><code tabindex="0">const document = {&NewLine;  _id: Buffer.from(&quot;...01202202&quot;),&NewLine;  items: {&NewLine;    20220605: { a: 10, n: 3 },&NewLine;    20220616: { p: 1, r: 1 },&NewLine;    20220627: { a: 5, r: 1 },&NewLine;    20220629: { p: 1 },&NewLine;  },&NewLine;};&NewLine;</code></pre>
<p>While this schema may not significantly reduce the document size compared to appV5R3, we can further optimize it by leveraging the fact that the year is already embedded in the _id field. This eliminates the need to repeat the year in the field names of the items document.</p>
<p>With this approach, the items document adopts a Dynamic Schema, where field names encode information and are not predefined.</p>
<p>To demonstrate various implementation possibilities, we will revisit all the bucketing criteria used in the appV5RX implementations, starting with appV5R0.</p>
<p>For appV6R0, which builds upon appV5R0 but uses a dynamic schema, data is bucketed by year and month. The field names in the items document represent only the day of the date, as the year and month are already stored in the _id field.</p>
<p>A detailed explanation of the bucketing logic and functions used to implement the current application can be found in the <a href="https://www.mongodb.com/developer/products/mongodb/cost-of-not-knowing-mongodb-part-2/#application-version-5-revision-0-and-revision-1--appv5r0-and-appv5r1---a-simple-way-to-use-the-bucket-pattern">appV5R0 introduction</a>.</p>
<p>The following document stores data for January 2022 (2022-01-XX), applying the newly presented idea:</p>
<pre><code tabindex="0">const document = {&NewLine;  _id: Buffer.from(&quot;...01202201&quot;),&NewLine;  items: {&NewLine;    &quot;05&quot;: { a: 10, n: 3 },&NewLine;    &quot;16&quot;: { p: 1, r: 1 },&NewLine;    &quot;27&quot;: { a: 5, r: 1 },&NewLine;    &quot;29&quot;: { p: 1 },&NewLine;  },&NewLine;};&NewLine;</code></pre>
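<p>A minimal sketch of how an appV5R3-style items array could be rewritten into this day-keyed shape (the helper name is ours, not from the series):</p>

```javascript
// Sketch: convert an appV5R3-style items array into the appV6R0
// day-keyed items document. The year and month live in _id already,
// so only the zero-padded day of the month survives as the field name.
function toDynamicItems(itemsArray) {
  const items = {};
  for (const { date, ...statuses } of itemsArray) {
    const DD = String(date.getUTCDate()).padStart(2, "0");
    const bucket = (items[DD] ||= {});
    for (const [status, value] of Object.entries(statuses)) {
      bucket[status] = (bucket[status] || 0) + value;
    }
  }
  return items;
}

const items = toDynamicItems([
  { date: new Date("2022-01-05"), a: 10, n: 3 },
  { date: new Date("2022-01-16"), p: 1, r: 1 },
]);
// items: { "05": { a: 10, n: 3 }, "16": { p: 1, r: 1 } }
```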
<h3>Schema</h3>
<p>The application implementation presented above would have the following TypeScript document schema denominated SchemaV6R0:</p>
<pre><code tabindex="0">export type SchemaV6R0 = {&NewLine;  _id: Buffer;&NewLine;  items: Record&lt;&NewLine;    string,&NewLine;    {&NewLine;      a?: number;&NewLine;      n?: number;&NewLine;      p?: number;&NewLine;      r?: number;&NewLine;    }&NewLine;  &gt;;&NewLine;};&NewLine;</code></pre>
<h3>Bulk upsert</h3>
<p>Based on the specification presented, we have the following updateOne operation for each event generated by this application version:</p>
<pre><code tabindex="0">const DD = getDD(event.date); // Extract the `day` from the `event.date`&NewLine;&NewLine;const operation = {&NewLine;  updateOne: {&NewLine;    filter: { _id: buildId(event.key, event.date) }, // key + year + month&NewLine;    update: {&NewLine;      $inc: {&NewLine;        [`items.${DD}.a`]: event.approved,&NewLine;        [`items.${DD}.n`]: event.noFunds,&NewLine;        [`items.${DD}.p`]: event.pending,&NewLine;        [`items.${DD}.r`]: event.rejected,&NewLine;      },&NewLine;    },&NewLine;    upsert: true,&NewLine;  },&NewLine;};&NewLine;</code></pre>
<p><b>filter:</b></p>
<ul>
<li>
<p>Target the document where the _id field matches the concatenated value of key, year, and month.</p>
</li>
<li>
<p>The buildId function converts the key+year+month into a binary format.</p>
</li>
</ul>
<p><b>update:</b></p>
<ul>
<li>
<p>Uses the <a href="https://www.mongodb.com/docs/manual/reference/operator/update/inc/">$inc</a> operator to increment the fields corresponding to the same DD as the event by the status values provided.</p>
</li>
<li>
<p>If a field does not exist in the items document and the event provides a value for it, $inc treats the non-existent field as having a value of 0 and performs the operation.</p>
</li>
<li>
<p>If a field exists in the items document but the event does not provide a value for it (i.e., undefined), $inc treats it as 0 and performs the operation.</p>
</li>
</ul>
<p><b>upsert:</b></p>
<ul>
<li>
<p>Ensures a new document is created if no matching document exists.</p>
</li>
</ul>
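<p>The <code>getDD</code> and <code>buildId</code> helpers referenced above are not shown in the article; the following is a plausible sketch under the <code>key+year+month</code> layout described earlier, so the exact encoding is an assumption.</p>

```javascript
// Plausible sketches of the helpers used by the Bulk Upsert operation.
function getDD(date) {
  // Zero-padded day of the month, used as the dynamic field name.
  return String(date.getUTCDate()).padStart(2, "0");
}

function buildId(key, date) {
  // Concatenate key + year + month and store it as binary to keep
  // the _id (and therefore the _id index) small.
  const YYYY = String(date.getUTCFullYear());
  const MM = String(date.getUTCMonth() + 1).padStart(2, "0");
  return Buffer.from(`${key}${YYYY}${MM}`);
}

const d = new Date("2022-06-05");
// getDD(d) === "05"; buildId("01", d) holds the bytes of "01202206"
```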
<h3>Get reports</h3>
<p>To fulfill the Get Reports operation, five aggregation pipelines are required, one for each date interval. Each pipeline follows the same structure, differing only in the filtering criteria in the $match stage:</p>
<pre><code tabindex="0">const pipeline = [&NewLine;  { $match: docsFromKeyBetweenDate },&NewLine;  { $addFields: buildTotalsField },&NewLine;  { $group: groupSumTotals },&NewLine;  { $project: { _id: 0 } },&NewLine;];&NewLine;</code></pre>
<p>The complete code for this aggregation pipeline is quite involved, so we present only pseudocode for it here.</p>
<p>1: <code tabindex="0">{ $match: docsFromKeyBetweenDate }</code></p>
<ul>
<li>
<p>Range-filters documents by _id to retrieve only buckets within the report date range. It has the same logic as appV5R0.</p>
</li>
</ul>
<p>2: <code tabindex="0">{ $addFields: buildTotalsField }</code></p>
<ul>
<li>
<p>The logic is similar to the one used in the Get Reports of appV5R3.</p>
</li>
<li>
<p>The <a href="https://www.mongodb.com/docs/manual/reference/operator/aggregation/objectToArray/">$objectToArray</a> operator is used to convert the items document into an array, enabling a $reduce operation.</p>
</li>
<li>
<p>Filtering the items fields within the report's range involves extracting the year and month from the _id field and the day from the field names in the items document.</p>
</li>
<li>
<p>The following JavaScript code is logically equivalent to the real aggregation pipeline code.</p>
</li>
</ul>
<pre><code tabindex="0">// Equivalent JavaScript logic:&NewLine;const MM = _id.slice(-2).toString(); // Get month from _id&NewLine;const YYYY = _id.slice(-6, -2).toString(); // Get year from _id&NewLine;const items_array = Object.entries(items); // Convert the object to an array of [key, value]&NewLine;&NewLine;const totals = items_array.reduce(&NewLine;  (accumulator, [DD, status]) =&gt; {&NewLine;    let statusDate = new Date(`${YYYY}-${MM}-${DD}`);&NewLine;&NewLine;    if (statusDate &gt;= reportStartDate &amp;&amp; statusDate &lt; reportEndDate) {&NewLine;      accumulator.a += status.a || 0;&NewLine;      accumulator.n += status.n || 0;&NewLine;      accumulator.p += status.p || 0;&NewLine;      accumulator.r += status.r || 0;&NewLine;    }&NewLine;&NewLine;    return accumulator;&NewLine;  },&NewLine;  { a: 0, n: 0, p: 0, r: 0 }&NewLine;);&NewLine;</code></pre>
<p>3: <code tabindex="0">{ $group: groupSumTotals }</code></p>
<ul>
<li>
<p>Group the totals of each document in the pipeline into final status totals using $sum operations.</p>
</li>
</ul>
<p>4: <code tabindex="0">{ $project: { _id: 0 } }</code></p>
<ul>
<li>
<p>Format the resulting document to match the report format.</p>
</li>
</ul>
<h3>Indexes</h3>
<p>No additional indexes are required, maintaining the single _id index approach established in the appV4 implementation.</p>
<h3>Initial scenario statistics</h3>
<h4>Collection statistics</h4>
<p>To evaluate the performance of appV6R0, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Documents</th>
					<th>Data Size</th>
					<th>Document Size</th>
					<th>Storage Size</th>
					<th>Indexes</th>
					<th>Index Size</th>
        </tr>
        <tr>
					<td>appV5R0</td>
					<td>95,350,431</td>
					<td>19.19GB</td>
					<td>217B</td>
					<td>5.06GB</td>
					<td>1</td>
					<td>2.95GB</td>
        </tr>
			<tr>
				<td>appV5R3</td>
				<td>33,429,492</td>
				<td>11.96GB</td>
				<td>385B</td>
				<td>3.24GB</td>
				<td>1</td>
				<td>1.11GB</td>
			</tr>
			<tr>
				<td>appV6R0</td>
				<td>95,350,319</td>
				<td>11.1GB</td>
				<td>125B</td>
				<td>3.33GB</td>
				<td>1</td>
				<td>3.13GB</td>
			</tr>
    </table>
</body>
</html>
<h4>Event statistics</h4>
<p>To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total data size and index size by the 500 million events.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Data Size/Events</th>
					<th>Index Size/Events</th>
					<th>Total Size/Events</th>
        </tr>
        <tr>
					<td>appV5R0</td>
					<td>41.2B</td>
					<td>6.3B</td>
					<td>47.5B</td>
        </tr>
			<tr>
				<td>appV5R3</td>
				<td>25.7B</td>
				<td>2.4B</td>
				<td>28.1B</td>
			</tr>
			<tr>
				<td>appV6R0</td>
				<td>23.8B</td>
				<td>6.7B</td>
				<td>30.5B</td>
			</tr>
    </table>
</body>
</html>
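<p>The Data Size/Events column can be reproduced directly from the collection statistics: total data size divided by the 500 million inserted events (assuming the table sizes are in GiB).</p>

```javascript
// Reproduce the Data Size/Events column from the tables above.
const EVENTS = 500_000_000;
const GiB = 1024 ** 3;

const bytesPerEvent = (dataSizeGiB) => (dataSizeGiB * GiB) / EVENTS;

const appV5R3 = bytesPerEvent(11.96); // ~25.7 B/event
const appV6R0 = bytesPerEvent(11.1); // ~23.8 B/event
```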
<p>It is challenging to make a direct comparison between appV6R0 and appV5R0 from a storage perspective. The appV5R0 implementation is the simplest bucketing possible, where event documents were merely appended to the items array without bucketing by day, as is done in appV6R0.</p>
<p>However, we can attempt a comparison between appV6R0 and appV5R3, the best solution so far. In appV6R0, data is bucketed by month, whereas in appV5R3, it is bucketed by quarter. Assuming document size scales linearly with the bucketing criteria (though this is not entirely accurate), the appV6R0 document would be approximately 3 * 125 = 375 bytes, roughly 2.6% smaller than appV5R3’s 385 bytes.</p>
<p>Another indicator of improvement is the Data Size/Events metric in the Event Statistics table. For appV6R0, each event uses an average of 23.8 bytes, compared to 25.7 bytes for appV5R3, representing a 7.4% reduction in size.</p>
<h3>Load test results</h3>
<p>Executing the load test for appV6R0 and plotting it alongside the results for appV5R0 and Desired rates, we have the following results for Get Reports and Bulk Upsert.</p>
<h4>Get Reports rates</h4>
<p>The two versions exhibit very similar rate performance, with appV6R0 showing slight superiority in the second and third quarters, while appV5R0 is superior in the first and fourth quarters.</p>
<center><caption><b>Figure 1.</b> Graph showing the rates of appV5R0 and appV6R0 when executing the load test for Get Reports functionality. Both have similar performance, but without reaching the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 11.49.49 AM-vzb1w73m29.png" alt="Graph showing the rates of appV5R0 and appV6R0 when executing the load test for Get Reports functionality. Both have similar performance, but without reaching the desired rates." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Get Reports latency</h4>
<p>The two versions exhibit very similar latency performance, with appV6R0 showing slight advantages in the second and third quarters, while appV5R0 is superior in the first and fourth quarters.</p>
<center><caption><b>Figure 2.</b> Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Get Reports functionality. appV5R0 has lower latency than appV6R0.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 11.52.08 AM-tcgwfvap9f.png" alt="Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Get Reports functionality. appV5R0 has lower latency than appV6R0. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert rates</h4>
<p>Both versions have similar rate values, but it can be seen that appV6R0 has a small edge compared to appV5R0.</p>
<center><caption><b>Figure 3.</b> Graph showing the rates of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has better rates than appV5R0, but without reaching the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 12.09.53 PM-rsx3uaz6xn.png" alt="Graph showing the rates of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has better rates than appV5R0, but without reaching the desired rates. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert latency</h4>
<p>Although both versions have similar latency values for the first quarter of the test, for the final three quarters, appV6R0 has a clear advantage over appV5R0.</p>
<center><caption><b>Figure 4.</b> Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has lower latency than appV5R0.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 12.11.15 PM-gnmg26z6bk.png" alt="Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has lower latency than appV5R0" title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Performance summary</h4>
<p>Despite the significant reduction in document and storage size achieved by appV6R0, the performance improvement was not as substantial as expected. This suggests that the bottleneck in the application when bucketing data by month may not be related to disk throughput.</p>
<p>Examining the collection stats table reveals that the index size for both versions is close to 3GB. This is near the 4GB of available memory on the machine running the database and exceeds the <a href="https://www.mongodb.com/docs/manual/core/wiredtiger/#memory-use">1.5GB allocated by WiredTiger for cache</a>. Therefore, it is likely that the limiting factor in this case is memory/cache rather than document size, which explains the lack of a significant performance improvement.</p>
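<p>Per the MongoDB documentation linked above, the default WiredTiger internal cache size is the larger of 50% of (RAM − 1 GB) and 256 MB, which for this 4 GB machine works out to the 1.5 GB cited:</p>

```javascript
// Default WiredTiger internal cache size:
// the larger of 50% of (RAM - 1 GB) and 256 MB.
function wiredTigerCacheGB(ramGB) {
  return Math.max(0.5 * (ramGB - 1), 256 / 1024);
}

wiredTigerCacheGB(4); // 1.5 -- far below the ~3 GB _id index of appV6R0
```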
<h3>Issues and improvements</h3>
<p>To address the limitations observed in appV6R0, we propose adopting the same line of improvements applied from appV5R0 to appV5R1. Specifically, we will bucket the events by quarter in appV6R1. This approach not only follows the established pattern of enhancements but also aligns with the need to optimize performance further.</p>
<p>As highlighted in the Load Test Results, the current bottleneck lies in the size of the index relative to the available cache/memory. By increasing the bucketing interval from month to quarter, we can reduce the number of documents by approximately a factor of three. This reduction will, in turn, decrease the number of index entries by the same factor, leading to a smaller index size.</p>
<h2>Application version 6 revision 1 (appV6R1): A dynamic quarter bucket document</h2>
<p>As discussed in the previous Issues and Improvements section, the primary bottleneck in appV6R0 was the index size nearing the memory capacity of the machine running MongoDB. To mitigate this issue, we propose increasing the bucketing interval from a month to a quarter for appV6R1, following the approach used in appV5R1.</p>
<p>This adjustment aims to reduce the number of documents and index entries by approximately a factor of three, thereby decreasing the overall index size. By adopting a quarter-based bucketing strategy, we align with the established pattern of enhancements applied in appV5R1 versions while addressing the specific memory/cache constraints identified in appV6R0.</p>
<p>The implementation of appV6R1 retains most of the code from appV6R0, with the following key differences:</p>
<ul>
<li>
<p>The _id field will now be composed of key+year+quarter.</p>
</li>
<li>
<p>The field names in the items document will encode both month and day, as this information is necessary for filtering date ranges in the Get Reports operation.</p>
</li>
</ul>
<p>The following example demonstrates how data for June 2022 (2022-06-XX), within the second quarter (Q2), is stored using the new schema:</p>
<pre><code tabindex="0">const document = {&NewLine;  _id: Buffer.from(&quot;...01202202&quot;),&NewLine;  items: {&NewLine;    &quot;0605&quot;: { a: 10, n: 3 },&NewLine;    &quot;0616&quot;: { p: 1, r: 1 },&NewLine;    &quot;0627&quot;: { a: 5, r: 1 },&NewLine;    &quot;0629&quot;: { p: 1 },&NewLine;  },&NewLine;};&NewLine;</code></pre>
<h3>Schema</h3>
<p>The application implementation presented above would have the following TypeScript document schema denominated SchemaV6R1:</p>
<pre><code tabindex="0">export type SchemaV6R1 = {&NewLine;  _id: Buffer;&NewLine;  items: Record&lt;&NewLine;    string,&NewLine;    {&NewLine;      a?: number;&NewLine;      n?: number;&NewLine;      p?: number;&NewLine;      r?: number;&NewLine;    }&NewLine;  &gt;;&NewLine;};&NewLine;</code></pre>
<h3>Bulk upsert</h3>
<p>Based on the specification presented, we have the following updateOne operation for each event generated by this application version:</p>
<pre><code tabindex="0">const MMDD = getMMDD(event.date); // Extract the month (MM) and day(DD) from the `event.date`&NewLine;&NewLine;const operation = {&NewLine;  updateOne: {&NewLine;    filter: { _id: buildId(event.key, event.date) }, // key + year + quarter&NewLine;    update: {&NewLine;      $inc: {&NewLine;        [`items.${MMDD}.a`]: event.approved,&NewLine;        [`items.${MMDD}.n`]: event.noFunds,&NewLine;        [`items.${MMDD}.p`]: event.pending,&NewLine;        [`items.${MMDD}.r`]: event.rejected,&NewLine;      },&NewLine;    },&NewLine;    upsert: true,&NewLine;  },&NewLine;};&NewLine;</code></pre>
<p>This updateOne operation has a similar logic to the one in appV6R0, with the only differences being the filter and update criteria.</p>
<p><b>filter:</b></p>
<ul>
<li>
<p>Target the document where the _id field matches the concatenated value of key, year, and quarter.</p>
</li>
<li>
<p>The buildId function converts the key+year+quarter into a binary format.</p>
</li>
</ul>
<p><b>update:</b></p>
<ul>
<li>
<p>Uses the $inc operator to increment the fields corresponding to the same MMDD as the event by the status values provided.</p>
</li>
</ul>
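<p>As with appV6R0, the <code>getMMDD</code> and quarter-based <code>buildId</code> helpers are not shown in the article; the following sketch is a plausible encoding, so treat the details as assumptions.</p>

```javascript
// Plausible sketches of the appV6R1 helpers.
function getMMDD(date) {
  // Zero-padded month and day, used together as the dynamic field name.
  const MM = String(date.getUTCMonth() + 1).padStart(2, "0");
  const DD = String(date.getUTCDate()).padStart(2, "0");
  return `${MM}${DD}`;
}

function buildId(key, date) {
  // Concatenate key + year + quarter ("01"-"04") and store it as binary.
  const YYYY = String(date.getUTCFullYear());
  const QQ = String(Math.ceil((date.getUTCMonth() + 1) / 3)).padStart(2, "0");
  return Buffer.from(`${key}${YYYY}${QQ}`);
}

const d = new Date("2022-06-16");
// getMMDD(d) === "0616"; buildId("01", d) holds the bytes of "01202202"
```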
<h3>Get reports</h3>
<p>To fulfill the Get Reports operation, five aggregation pipelines are required, one for each date interval. Each pipeline follows the same structure, differing only in the filtering criteria in the $match stage:</p>
<pre><code tabindex="0">const pipeline = [&NewLine;  { $match: docsFromKeyBetweenDate },&NewLine;  { $addFields: buildTotalsField },&NewLine;  { $group: groupSumTotals },&NewLine;  { $project: { _id: 0 } },&NewLine;];&NewLine;</code></pre>
<p>This aggregation operation has a similar logic to the one in appV6R0, with the only differences being the implementation in the $addFields stage.</p>
<p><code tabindex="0">{ $addFields: buildTotalsField }:</code></p>
<ul>
<li>
<p>A similar implementation to the one in appV6R0</p>
</li>
<li>
<p>The difference lies in extracting the value of the year (YYYY) from the _id field and the month and day (MMDD) from the field name.</p>
</li>
<li>
<p>The following JavaScript code is logically equivalent to the real aggregation pipeline code.</p>
</li>
</ul>
<pre><code tabindex="0">const YYYY = _id.slice(-6, -2).toString(); // Get year from _id&NewLine;const items_array = Object.entries(items); // Convert the object to an array of [key, value]&NewLine;&NewLine;const totals = items_array.reduce(&NewLine;  (accumulator, [MMDD, status]) =&gt; {&NewLine;    let [MM, DD] = [MMDD.slice(0, 2), MMDD.slice(2, 4)];&NewLine;    let statusDate = new Date(`${YYYY}-${MM}-${DD}`);&NewLine;&NewLine;    if (statusDate &gt;= reportStartDate &amp;&amp; statusDate &lt; reportEndDate) {&NewLine;      accumulator.a += status.a || 0;&NewLine;      accumulator.n += status.n || 0;&NewLine;      accumulator.p += status.p || 0;&NewLine;      accumulator.r += status.r || 0;&NewLine;    }&NewLine;&NewLine;    return accumulator;&NewLine;  },&NewLine;  { a: 0, n: 0, p: 0, r: 0 }&NewLine;);&NewLine;</code></pre>
<h3>Indexes</h3>
<p>No additional indexes are required, maintaining the single _id index approach established in the appV4 implementation.</p>
<h3>Initial scenario statistics</h3>
<h4>Collection statistics</h4>
<p>To evaluate the performance of appV6R1, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Documents</th>
					<th>Data Size</th>
					<th>Document Size</th>
					<th>Storage Size</th>
					<th>Indexes</th>
					<th>Index Size</th>
        </tr>
        <tr>
					<td>appV5R3</td>
					<td>33,429,492</td>
					<td>11.96GB</td>
					<td>385B</td>
					<td>3.24GB</td>
					<td>1</td>
					<td>1.11GB</td>
        </tr>
			<tr>
				<td>appV6R0</td>
				<td>95,350,319</td>
				<td>11.1GB</td>
				<td>125B</td>
				<td>3.33GB</td>
				<td>1</td>
				<td>3.13GB</td>
			</tr>
			<tr>
				<td>appV6R1</td>
				<td>33,429,366</td>
				<td>8.19GB</td>
				<td>264B</td>
				<td>2.34GB</td>
				<td>1</td>
				<td>1.22GB</td>
			</tr>
    </table>
</body>
</html>
<h4>Event statistics</h4>
<p>To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total data size and index size by the 500 million events.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Data Size/Events</th>
					<th>Index Size/Events</th>
					<th>Total Size/Events</th>
        </tr>
        <tr>
					<td>appV5R3</td>
					<td>25.7B</td>
					<td>2.4B</td>
					<td>28.1B</td>
        </tr>
			<tr>
				<td>appV6R0</td>
				<td>23.8B</td>
				<td>6.7B</td>
				<td>30.5B</td>
			</tr>
			<tr>
				<td>appV6R1</td>
				<td>17.6B</td>
				<td>2.6B</td>
				<td>20.2B</td>
			</tr>
    </table>
</body>
</html>
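<p>As a sanity check, the per-event figures can be reproduced from the collection statistics. The sketch below assumes the table's sizes are binary gigabytes (GiB) and uses appV6R1's values; it is an illustration, not code from the application:</p>

```javascript
// Reproduce Data Size/Events and Index Size/Events for appV6R1 from the
// collection statistics table (sizes assumed to be binary GiB).
const GiB = 1024 ** 3;
const events = 500_000_000;

const bytesPerEvent = (gib) => (gib * GiB) / events;

console.log(bytesPerEvent(8.19).toFixed(1)); // data size per event, ≈ 17.6
console.log(bytesPerEvent(1.22).toFixed(1)); // index size per event, ≈ 2.6
```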
<p>In the previous Initial Scenario Statistics analysis, we assumed that document size would scale linearly with the bucketing range. That assumption proved inaccurate: the average document size in appV6R1 is only about twice as large as in appV6R0, even though it stores three times as much data. That alone is a win for the new implementation.</p>
<p>Since appV6R1 buckets data by quarter at the document level and by day within the items sub-document, a fair comparison would be with appV5R3, the best-performing version so far. From the tables above, we observe a significant improvement in Document Size and consequently Data Size when transitioning from appV5R3 to appV6R1. Specifically, there was a 31.4% reduction in Document Size. From an index size perspective, there was no change, as both versions bucket events by quarter.</p>
<h3>Load test results</h3>
<p>Executing the load test for appV6R0 and plotting it alongside the results for appV5R0 and Desired rates, we have the following results for Get Reports and Bulk Upsert.</p>
<h4>Get Reports rates</h4>
<p>For the first three quarters of the test, both versions have similar rate values, but in the final quarter, appV6R1 has a notable edge over appV5R3.</p>
<center><caption><b>Figure 5.</b> Graph showing the rates of appV5R3 and appV6R1 when executing the load test for Get Reports functionality. appV5R3 has better rates than appV6R1, but without reaching the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 12.41.58 PM-uuprlz3n1y.png" alt="Graph showing the rates of appV5R3 and appV6R1 when executing the load test for Get Reports functionality. appV5R3 has better rates than appV6R1, but without reaching the desired rates." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Get Reports latency</h4>
<p>The two versions exhibit very similar latency performance, with appV6R0 showing slight advantages in the second and third quarters, while appV5R0 is superior in the first and fourth quarters.</p>
<center><caption><b>Figure 6.</b> Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Get Reports functionality. appV5R0 has lower latency than appV6R0.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 12.44.56 PM-x60k3nk98v.png" alt="Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Get Reports functionality. appV5R0 has lower latency than appV6R0. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert rates</h4>
<p>Both versions have similar rate values, with appV6R0 holding a small edge over appV5R0.</p>
<center><caption><b>Figure 7.</b> Graph showing the rates of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has better rates than appV5R0, but without reaching the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 12.47.10 PM-0pl18wvhs1.png" alt="Graph showing the rates of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has better rates than appV5R0, but without reaching the desired rates. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert latency</h4>
<p>Although both versions have similar latency values for the first quarter of the test, appV6R0 has a clear advantage over appV5R0 for the final three quarters.</p>
<center><caption><b>Figure 8.</b> Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has lower latency than appV5R0.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 12.49.40 PM-zg75zchnty.png" alt="Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has lower latency than appV5R0. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Performance summary</h4>
<p>Despite the significant reduction in document and storage size achieved by appV6R0, the performance improvement was not as substantial as expected. This suggests that the bottleneck in the application when bucketing data by month may not be related to disk throughput.</p>
<p>Examining the collection stats table reveals that the index size for both versions is close to 3GB. This is near the 4GB of available memory on the machine running the database and exceeds the <a href="https://www.mongodb.com/docs/manual/core/wiredtiger/#memory-use">1.5GB allocated by WiredTiger for cache</a>. Therefore, it is likely that the limiting factor in this case is memory/cache rather than document size, which explains the lack of a significant performance improvement.</p>
<h3>Issues and improvements</h3>
<p>To address the limitations observed in appV6R0, we propose adopting the same line of improvements applied from appV5R0 to appV5R1. Specifically, we will bucket the events by quarter in appV6R1. This approach not only follows the established pattern of enhancements but also aligns with the need to optimize performance further.</p>
<p>As highlighted in the Load Test Results, the current bottleneck lies in the size of the index relative to the available cache/memory. By increasing the bucketing interval from month to quarter, we can reduce the number of documents by approximately a factor of three. This reduction will, in turn, decrease the number of index entries by the same factor, leading to a smaller index size.</p>
<h2>Application version 6 revision 1 (appV6R1): A dynamic quarter bucket document</h2>
<p>As discussed in the previous Issues and Improvements section, the primary bottleneck in appV6R0 was the index size nearing the memory capacity of the machine running MongoDB. To mitigate this issue, we propose increasing the bucketing interval from a month to a quarter for appV6R1, following the approach used in appV5R1.</p>
<p>This adjustment aims to reduce the number of documents and index entries by approximately a factor of three, thereby decreasing the overall index size. By adopting a quarter-based bucketing strategy, we align with the established pattern of enhancements applied in appV5R1 versions while addressing the specific memory/cache constraints identified in appV6R0.</p>
<p>The implementation of appV6R1 retains most of the code from appV6R0, with the following key differences:</p>
<ul>
<li>
<p>The _id field will now be composed of key+year+quarter.</p>
</li>
<li>
<p>The field names in the items document will encode both month and day, as this information is necessary for filtering date ranges in the Get Reports operation.</p>
</li>
</ul>
<p>The following example demonstrates how data for June 2022 (2022-06-XX), within the second quarter (Q2), is stored using the new schema:</p>
<pre><code tabindex="0">const document = {&NewLine;  _id: Buffer.from(&quot;...01202202&quot;),&NewLine;  items: {&NewLine;    &quot;0605&quot;: { a: 10, n: 3 },&NewLine;    &quot;0616&quot;: { p: 1, r: 1 },&NewLine;    &quot;0627&quot;: { a: 5, r: 1 },&NewLine;    &quot;0629&quot;: { p: 1 },&NewLine;  },&NewLine;};&NewLine;</code></pre>
<h3>Schema</h3>
<p>The implementation presented above uses the following TypeScript document schema, named SchemaV6R0:</p>
<pre><code tabindex="0">export type SchemaV6R0 = {&NewLine;  _id: Buffer;&NewLine;  items: Record&lt;&NewLine;    string,&NewLine;    {&NewLine;      a?: number;&NewLine;      n?: number;&NewLine;      p?: number;&NewLine;      r?: number;&NewLine;    }&NewLine;  &gt;;&NewLine;};&NewLine;</code></pre>
<h3>Bulk upsert</h3>
<p>Based on the specification presented, we have the following updateOne operation for each event generated by this application version:</p>
<pre><code tabindex="0">const MMDD = getMMDD(event.date); // Extract the month (MM) and day(DD) from the `event.date`&NewLine;&NewLine;const operation = {&NewLine;  updateOne: {&NewLine;    filter: { _id: buildId(event.key, event.date) }, // key + year + quarter&NewLine;    update: {&NewLine;      $inc: {&NewLine;        [`items.${MMDD}.a`]: event.approved,&NewLine;        [`items.${MMDD}.n`]: event.noFunds,&NewLine;        [`items.${MMDD}.p`]: event.pending,&NewLine;        [`items.${MMDD}.r`]: event.rejected,&NewLine;      },&NewLine;    },&NewLine;    upsert: true,&NewLine;  },&NewLine;};&NewLine;</code></pre>
<p>This updateOne operation has a similar logic to the one in appV6R0, with the only differences being the filter and update criteria.</p>
<p><b>filter:</b></p>
<ul>
<li>
<p>Target the document where the _id field matches the concatenated value of key, year, and quarter.</p>
</li>
<li>
<p>The buildId function converts the key+year+quarter into a binary format.</p>
</li>
</ul>
<p><b>update:</b></p>
<ul>
<li>Uses the $inc operator to increment the fields corresponding to the same MMDD as the event by the status values provided.</li>
</ul>
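<p>The article does not show the <code>buildId</code> and <code>getMMDD</code> helpers themselves. Based on the <code>...01202202</code> example above, they could be sketched as follows; this is a hypothetical implementation using a short illustrative key, and the real one encodes the full key in binary:</p>

```javascript
// Hypothetical helpers producing the _id and field-name formats described
// above: key + year + quarter for the _id, and MMDD for items field names.
function getQuarter(date) {
  return Math.floor(date.getUTCMonth() / 3) + 1; // 1..4
}

function buildId(key, date) {
  const year = date.getUTCFullYear().toString();
  const quarter = getQuarter(date).toString().padStart(2, "0");
  return Buffer.from(`${key}${year}${quarter}`); // binary _id, e.g. "01202202"
}

function getMMDD(date) {
  const MM = (date.getUTCMonth() + 1).toString().padStart(2, "0");
  const DD = date.getUTCDate().toString().padStart(2, "0");
  return `${MM}${DD}`;
}

// An event on 2022-06-16 (Q2) for an illustrative key "01" lands in the
// bucket "01202202", under the items field "0616".
const eventDate = new Date("2022-06-16T00:00:00Z");
console.log(buildId("01", eventDate).toString()); // "01202202"
console.log(getMMDD(eventDate)); // "0616"
```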
<h3>Get reports</h3>
<p>To fulfill the Get Reports operation, five aggregation pipelines are required, one for each date interval. Each pipeline follows the same structure, differing only in the filtering criteria in the $match stage:</p>
<pre><code tabindex="0">const pipeline = [&NewLine;  { $match: docsFromKeyBetweenDate },&NewLine;  { $addFields: buildTotalsField },&NewLine;  { $group: groupSumTotals },&NewLine;  { $project: { _id: 0 } },&NewLine;];&NewLine;</code></pre>
<p>This aggregation operation follows similar logic to the one in appV6R0; the only difference is the implementation of the $addFields stage.</p>
<p><code tabindex="0">{ $addFields: buildTotalsField }:</code></p>
<ul>
<li>
<p>A similar implementation to the one in appV6R0</p>
</li>
<li>
<p>The difference lies in extracting the value of the year (YYYY) from the _id field and the month and day (MMDD) from the field name.</p>
</li>
<li>
<p>The following JavaScript code is logically equivalent to the actual aggregation pipeline code.</p>
</li>
</ul>
<pre><code tabindex="0">const YYYY = _id.slice(-6, -2).toString(); // Get year from _id&NewLine;const items_array = Object.entries(items); // Convert the object to an array of [key, value]&NewLine;&NewLine;const totals = items_array.reduce(&NewLine;  (accumulator, [MMDD, status]) =&gt; {&NewLine;    let [MM, DD] = [MMDD.slice(0, 2), MMDD.slice(2, 4)];&NewLine;    let statusDate = new Date(`${YYYY}-${MM}-${DD}`);&NewLine;&NewLine;    if (statusDate &gt;= reportStartDate &amp;&amp; statusDate &lt; reportEndDate) {&NewLine;      accumulator.a += status.a || 0;&NewLine;      accumulator.n += status.n || 0;&NewLine;      accumulator.p += status.p || 0;&NewLine;      accumulator.r += status.r || 0;&NewLine;    }&NewLine;&NewLine;    return accumulator;&NewLine;  },&NewLine;  { a: 0, n: 0, p: 0, r: 0 }&NewLine;);&NewLine;</code></pre>
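<p>Applying this logic to the sample Q2/2022 document from the schema example makes the filtering concrete. The report window below is an arbitrary choice for illustration:</p>

```javascript
// Runnable sketch of the reduce logic above, applied to the sample quarter
// document. Field names follow the MMDD convention; the report window is a
// made-up example.
const _id = Buffer.from("01202202"); // key ("01") + year (2022) + quarter (02)
const items = {
  "0605": { a: 10, n: 3 },
  "0616": { p: 1, r: 1 },
  "0627": { a: 5, r: 1 },
  "0629": { p: 1 },
};

const reportStartDate = new Date("2022-06-10");
const reportEndDate = new Date("2022-06-28");

const YYYY = _id.slice(-6, -2).toString(); // "2022"
const totals = Object.entries(items).reduce(
  (accumulator, [MMDD, status]) => {
    const [MM, DD] = [MMDD.slice(0, 2), MMDD.slice(2, 4)];
    const statusDate = new Date(`${YYYY}-${MM}-${DD}`);
    // Only 0616 and 0627 fall in [2022-06-10, 2022-06-28)
    if (statusDate >= reportStartDate && statusDate < reportEndDate) {
      accumulator.a += status.a || 0;
      accumulator.n += status.n || 0;
      accumulator.p += status.p || 0;
      accumulator.r += status.r || 0;
    }
    return accumulator;
  },
  { a: 0, n: 0, p: 0, r: 0 }
);
console.log(totals); // { a: 5, n: 0, p: 1, r: 2 }
```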
<h3>Indexes</h3>
<p>No additional indexes are required, maintaining the single _id index approach established in the appV4 implementation.</p>
<h3>Initial scenario statistics</h3>
<h4>Collection statistics</h4>
<p>To evaluate the performance of appV6R2, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Documents</th>
					<th>Data Size</th>
					<th>Document Size</th>
					<th>Storage Size</th>
					<th>Indexes</th>
					<th>Index Size</th>
        </tr>
        <tr>
					<td>appV5R3</td>
					<td>33,429,492</td>
					<td>11.96GB</td>
					<td>385B</td>
					<td>3.24GB</td>
					<td>1</td>
					<td>1.11GB</td>
        </tr>
			 <tr>
					<td>appV6R1</td>
					<td>33,429,366</td>
					<td>8.19GB</td>
					<td>264B</td>
					<td>2.34GB</td>
					<td>1</td>
					<td>1.22GB</td>
        </tr>
			<tr>
					<td>appV6R2</td>
					<td>33,429,207</td>
					<td>9.11GB</td>
					<td>293B</td>
					<td>2.8GB</td>
					<td>1</td>
					<td>1.26GB</td>
        </tr>
    </table>
</body>
</html>
<h4>Event statistics</h4>
<p>To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total data size and index size by the 500 million events.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Data Size/Events</th>
					<th>Index Size/Events</th>
					<th>Total Size/Events</th>
			</tr>
			<tr>
				<td>appV5R3</td>
				<td>25.7B</td>
				<td>2.4B</td>
				<td>28.1B</td>
			</tr>
			<tr>
				<td>appV6R1</td>
				<td>17.6B</td>
				<td>2.6B</td>
				<td>20.2B</td>
			</tr>
			<tr>
				<td>appV6R2</td>
				<td>19.6B</td>
				<td>2.7B</td>
				<td>22.3B</td>
			</tr>
	</table>
	</body>
	</html>
<p>As expected, adding a totals field to each document produced an 11.2% increase in Document Size for appV6R2. Compared with appV5R3, we still have a 23.9% reduction in Document Size. Let's review the Load Test Results to see whether the trade-off between storage and computation cost is worthwhile.</p>
<h3>Load test results</h3>
<p>Executing the load test for appV6R2 and plotting it alongside the results for appV6R1 and Desired rates, we have the following results for Get Reports and Bulk Upsert.</p>
<h4>Get Reports rates</h4>
<p>We can see that appV6R2 has better rates than appV6R1 throughout the test, but it’s still not reaching the top rate of 250 reports per second.</p>
<center><caption><b>Figure 9.</b> Graph showing the rates of appV6R1 and appV6R2 when executing the load test for Get Reports functionality. appV6R2 has better rates than appV6R1, but without reaching the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 1.23.52 PM-91zq1baq1t.png" alt="Graph showing the rates of appV6R1 and appV6R2 when executing the load test for Get Reports functionality. appV6R2 has better rates than appV6R1, but without reaching the desired rates." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Get Reports latency</h4>
<p>As shown in the rates graph, appV6R2 consistently provides lower latency than appV6R1 throughout the test.</p>
<center><caption><b>Figure 10.</b> Graph showing the latency of appV6R1 and appV6R2 when executing the load test for Get Reports functionality. appV6R2 has lower latency than appV6R1.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 1.25.32 PM-gpfjrmosjv.png" alt="Graph showing the latency of appV6R1 and appV6R2 when executing the load test for Get Reports functionality. appV6R2 has lower latency than appV6R1." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert rates</h4>
<p>Both versions exhibit very similar rate values throughout the test, with appV6R2 performing slightly better than appV6R1 in the final 20 minutes, yet still failing to reach the desired rate.</p>
<center><caption><b>Figure 11.</b> Graph showing the rates of appV6R1 and appV6R2 when executing the load test for Bulk Upsert functionality. appV6R2 has better rates than appV6R1, almost reaching the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 1.27.07 PM-gs2myt8s27.png" alt="Graph showing the rates of appV6R1 and appV6R2 when executing the load test for Bulk Upsert functionality. appV6R2 has better rates than appV6R1, almost reaching the desired rates. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert latency</h4>
<p>Although appV6R2 had better rate values than appV6R1, the latency comparison is inconclusive: appV6R2 is superior in the first and final quarters, and appV6R1 in the second and third.</p>
<center><caption><b>Figure 12.</b> Graph showing the latency of appV6R1 and appV6R2 when executing the load test for Bulk Upsert functionality. Both versions have similar latencies.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 1.28.54 PM-im3qzf1xvd.png" alt="Graph showing the latency of appV6R1 and appV6R2 when executing the load test for Bulk Upsert functionality. Both versions have similar latencies." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Performance summary</h4>
<p>The two &quot;maybes&quot; from the previous Issues and Improvements lived up to their promises, and appV6R2 delivered the best performance yet when compared to appV6R1. This is the redemption of the Computed Pattern applied at the document level. This revision is one of my favorites because it shows that the same optimization on very similar applications can lead to different results. In our case, the difference came from the application being heavily bottlenecked by disk throughput.</p>
<h3>Issues and improvements</h3>
<p>Let's tackle the last improvement on an application level. Those paying close attention to the application versions may have already questioned it. In every Get Reports section, we have &quot;To fulfill the Get Reports operation, five aggregation pipelines are required, one for each date interval.&quot; Do we really need to run five aggregation pipelines to generate the reports document? Isn't there a way to calculate everything in just one operation? The answer is yes, there is.</p>
<p>The reports document is composed of the fields oneYear, threeYears, fiveYears, sevenYears, and tenYears, each of which has been generated by its own aggregation pipeline until now. Generating the reports this way wastes processing power because part of the calculation is repeated. For example, calculating the status totals for tenYears also covers the totals for the other fields, since, from a date range perspective, their ranges are all contained in the tenYears range.</p>
<p>So, for our next application revision, we'll condense the five Get Reports aggregation pipelines into one, avoiding wasted processing power on repeated calculations.</p>
<h2>Application version 6 revision 3 (appV6R3): Getting everything at once</h2>
<p>As discussed in the previous Issues and Improvements section, in this revision, we'll improve the performance of our application by changing the Get Reports functionality to generate the reports document using only one aggregation pipeline instead of five.</p>
<p>The rationale behind this improvement is that when we generate the tenYears totals, we have also calculated the other totals, oneYear, threeYears, fiveYears, and sevenYears. As an example, when we request to Get Reports with the key ...0001 with the date 2022-01-01, the totals will be calculated with the following date range:</p>
<ul>
<li>
<p>oneYear: from 2021-01-01 to 2022-01-01</p>
</li>
<li>
<p>threeYears: from 2020-01-01 to 2022-01-01</p>
</li>
<li>
<p>fiveYears: from 2018-01-01 to 2022-01-01</p>
</li>
<li>
<p>sevenYears: from 2016-01-01 to 2022-01-01</p>
</li>
<li>
<p>tenYears: from 2013-01-01 to 2022-01-01</p>
</li>
</ul>
<p>As we can see from the list above, the date range for tenYears encompasses all the other date ranges.</p>
<p>Although we successfully implemented the Computed Pattern in the previous revision, appV6R2, achieving better results than appV6R1, we will not use it as a base for this revision. There were two reasons for that:</p>
<ol>
<li>
<p>Based on the results of our previous implementation of the Computed Pattern on a document level, from appV5R3 to appV5R4, I didn't expect it to get better results.</p>
</li>
<li>
<p>Implementing Get Reports to retrieve the reports document through a single aggregation pipeline, utilizing pre-computed field totals generated by the Computed Pattern would require significant effort. By the time of the latest versions of this series, I just wanted to finish it.</p>
</li>
</ol>
<p>So, this revision is built on appV6R1.</p>
<h3>Schema</h3>
<p>The implementation presented above uses the following TypeScript document schema, named SchemaV6R0:</p>
<pre><code tabindex="0">export type SchemaV6R0 = {&NewLine;  _id: Buffer;&NewLine;  items: Record&lt;&NewLine;    string,&NewLine;    {&NewLine;      a?: number;&NewLine;      n?: number;&NewLine;      p?: number;&NewLine;      r?: number;&NewLine;    }&NewLine;  &gt;;&NewLine;};&NewLine;</code></pre>
<h3>Bulk upsert</h3>
<p>Based on the specifications, the following bulk updateOne operation is used for each event generated by the application:</p>
<pre><code tabindex="0">const YYYYMMDD = getYYYYMMDD(event.date); // Extract the year(YYYY), month(MM), and day(DD) from the `event.date`&NewLine;&NewLine;const operation = {&NewLine;  updateOne: {&NewLine;    filter: { _id: buildId(event.key, event.date) }, // key + year + quarter&NewLine;    update: {&NewLine;      $inc: {&NewLine;        [`items.${YYYYMMDD}.a`]: event.approved,&NewLine;        [`items.${YYYYMMDD}.n`]: event.noFunds,&NewLine;        [`items.${YYYYMMDD}.p`]: event.pending,&NewLine;        [`items.${YYYYMMDD}.r`]: event.rejected,&NewLine;      },&NewLine;    },&NewLine;    upsert: true,&NewLine;  },&NewLine;};&NewLine;</code></pre>
<p>This updateOne has almost exactly the same logic as the one for appV6R1. The difference is that the names of the fields in the items document are built from the year, month, and day (YYYYMMDD) instead of just the month and day (MMDD). This change reduces the complexity of the Get Reports aggregation pipeline.</p>
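<p>As with the other helpers, getYYYYMMDD is not shown in the article; a plausible sketch, assuming UTC dates, could look like this:</p>

```javascript
// Hypothetical getYYYYMMDD: the year-prefixed variant of getMMDD, producing
// the field names used in the items document for this revision.
function getYYYYMMDD(date) {
  const YYYY = date.getUTCFullYear().toString();
  const MM = (date.getUTCMonth() + 1).toString().padStart(2, "0");
  const DD = date.getUTCDate().toString().padStart(2, "0");
  return `${YYYY}${MM}${DD}`;
}

console.log(getYYYYMMDD(new Date("2022-06-16T00:00:00Z"))); // "20220616"
```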
<h3>Get reports</h3>
<p>To fulfill the Get Reports operation, one aggregation pipeline is required:</p>
<pre><code tabindex="0">const pipeline = [&NewLine;  { $match: docsFromKeyBetweenDate },&NewLine;  { $addFields: buildTotalsField },&NewLine;  { $group: groupCountTotals },&NewLine;  { $project: format },&NewLine;];&NewLine;</code></pre>
<p>This aggregation operation follows similar logic to the one in appV6R1; the only difference is the implementation of the $addFields stage.</p>
<p><code tabindex="0">{ $addFields: buildTotalsField }</code></p>
<ul>
<li>
<p>It follows a similar logic to the previous revision, where we first convert the items document into an array using $objectToArray, and then use the reduce function to iterate over the array, accumulating the status.</p>
</li>
<li>
<p>The difference lies in the initial value and the logic of the reduce function.</p>
</li>
<li>
<p>The initial value in this case is an object/document with one field for each report date range. Each of these fields is itself an object/document whose fields are the possible statuses, all set to zero as the initial value.</p>
</li>
<li>
<p>The logic checks which date range the item falls into and increments the totals accordingly. If the item isInOneYearDateRange(...), it is also in all the other date ranges: three, five, seven, and ten years. If the item isInThreeYearsDateRange(...), it is also in all the wider date ranges: five, seven, and ten years.</p>
</li>
<li>
<p>The following JavaScript code is logically equivalent to the actual aggregation pipeline code. Senior developers could argue that this implementation could be less verbose or more optimized; however, due to how MongoDB aggregation pipeline operators are specified, this is how it was implemented.</p>
</li>
</ul>
<pre><code tabindex="0">const itemsArray = Object.entries(items); // Convert the object to an array of [key, value]&NewLine;&NewLine;const totals = itemsArray.reduce(&NewLine;  (totals, [YYYYMMDD, status]) =&gt; {&NewLine;    const YYYY = YYYYMMDD.slice(0, 4); // Get year&NewLine;    const MM = YYYYMMDD.slice(4, 6); // Get month&NewLine;    const DD = YYYYMMDD.slice(6, 8); // Get day&NewLine;    let statusDate = new Date(`${YYYY}-${MM}-${DD}`);&NewLine;&NewLine;    if (isInOneYearDateRange(statusDate)) {&NewLine;      totals.oneYear = incrementTotals(totals.oneYear, status);&NewLine;      totals.threeYears = incrementTotals(totals.threeYears, status);&NewLine;      totals.fiveYears = incrementTotals(totals.fiveYears, status);&NewLine;      totals.sevenYears = incrementTotals(totals.sevenYears, status);&NewLine;      totals.tenYears = incrementTotals(totals.tenYears, status);&NewLine;    } else if (isInThreeYearsDateRange(statusDate)) {&NewLine;      totals.threeYears = incrementTotals(totals.threeYears, status);&NewLine;      totals.fiveYears = incrementTotals(totals.fiveYears, status);&NewLine;      totals.sevenYears = incrementTotals(totals.sevenYears, status);&NewLine;      totals.tenYears = incrementTotals(totals.tenYears, status);&NewLine;    } else if (isInFiveYearsDateRange(statusDate)) {&NewLine;      totals.fiveYears = incrementTotals(totals.fiveYears, status);&NewLine;      totals.sevenYears = incrementTotals(totals.sevenYears, status);&NewLine;      totals.tenYears = incrementTotals(totals.tenYears, status);&NewLine;    } else if (isInSevenYearsDateRange(statusDate)) {&NewLine;      totals.sevenYears = incrementTotals(totals.sevenYears, status);&NewLine;      totals.tenYears = incrementTotals(totals.tenYears, status);&NewLine;    } else if (isInTenYearsDateRange(statusDate)) {&NewLine;      totals.tenYears = incrementTotals(totals.tenYears, status);&NewLine;    }&NewLine;&NewLine;    return totals;&NewLine;  },&NewLine;  {&NewLine;    oneYear: { a: 0, n: 0, p: 0, r: 0 },&NewLine;    threeYears: { a: 0, n: 0, p: 0, r: 0 },&NewLine;    fiveYears: { a: 0, n: 0, p: 0, r: 0 },&NewLine;    sevenYears: { a: 0, n: 0, p: 0, r: 0 },&NewLine;    tenYears: { a: 0, n: 0, p: 0, r: 0 },&NewLine;  }&NewLine;);&NewLine;</code></pre>
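<p>The cascade can also be expressed as a loop over the ranges, from narrowest to widest. The sketch below is runnable and uses the example range boundaries listed earlier for a 2022-01-01 report date; incrementTotals and the range boundaries are assumptions modeled on the names in the article, not its actual implementation:</p>

```javascript
// Cascading accumulation: find the narrowest range containing the date, then
// increment it and every wider range (the if/else-if cascade as a loop).
const reportEnd = new Date("2022-01-01");
const ranges = [
  ["oneYear", new Date("2021-01-01")],
  ["threeYears", new Date("2020-01-01")],
  ["fiveYears", new Date("2018-01-01")],
  ["sevenYears", new Date("2016-01-01")],
  ["tenYears", new Date("2013-01-01")],
];

const incrementTotals = (t, s) => ({
  a: t.a + (s.a || 0),
  n: t.n + (s.n || 0),
  p: t.p + (s.p || 0),
  r: t.r + (s.r || 0),
});

function accumulate(totals, statusDate, status) {
  const idx = ranges.findIndex(([, from]) => statusDate >= from && statusDate < reportEnd);
  if (idx === -1) return totals; // outside every report range
  for (let i = idx; i < ranges.length; i++) {
    const key = ranges[i][0];
    totals[key] = incrementTotals(totals[key], status);
  }
  return totals;
}

const init = Object.fromEntries(ranges.map(([k]) => [k, { a: 0, n: 0, p: 0, r: 0 }]));
let totals = accumulate(init, new Date("2021-06-15"), { a: 2 }); // in all ranges
totals = accumulate(totals, new Date("2019-06-15"), { r: 1 }); // fiveYears and wider
console.log(totals.oneYear, totals.tenYears);
```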
<h3>Indexes</h3>
<p>No additional indexes are required, maintaining the single _id index approach established in the appV4 implementation.</p>
<h3>Initial scenario statistics</h3>
<h4>Collection statistics</h4>
<p>To evaluate the performance of appV6R3, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Documents</th>
					<th>Data Size</th>
					<th>Document Size</th>
					<th>Storage Size</th>
					<th>Indexes</th>
					<th>Index Size</th>
        </tr>
        <tr>
					<td>appV6R1</td>
					<td>33,429,366</td>
					<td>8.19GB</td>
					<td>264B</td>
					<td>2.34GB</td>
					<td>1</td>
					<td>1.22GB</td>
        </tr>
			 <tr>
					<td>appV6R2</td>
					<td>33,429,207</td>
					<td>9.11GB</td>
					<td>293B</td>
					<td>2.8GB</td>
					<td>1</td>
					<td>1.26GB</td>
        </tr>
			<tr>
					<td>appV6R3</td>
					<td>33,429,694</td>
					<td>9.53GB</td>
					<td>307B</td>
					<td>2.56GB</td>
					<td>1</td>
					<td>1.19GB</td>
        </tr>
    </table>
</body>
</html>
<h4>Event statistics</h4>
<p>To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total data size and index size by the 500 million events.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Data Size/Events</th>
					<th>Index Size/Events</th>
					<th>Total Size/Events</th>
			</tr>
			<tr>
				<td>appV6R1</td>
				<td>17.6B</td>
				<td>2.6B</td>
				<td>20.2B</td>
			</tr>
			<tr>
				<td>appV6R2</td>
				<td>19.6B</td>
				<td>2.7B</td>
				<td>22.3B</td>
			</tr>
			<tr>
				<td>appV6R3</td>
				<td>20.5B</td>
				<td>2.6B</td>
				<td>23.1B</td>
			</tr>
			    </table>
</body>
</html>
<p>Because we now include the year (YYYY) in the name of each items document field, we got a 16.3% increase in Document Size compared to appV6R1 and a 4.8% increase compared to appV6R2. This increase may be compensated by gains in the Get Reports function, as we saw when going from appV6R1 to appV6R2.</p>
<h3>Load test results</h3>
<p>Executing the load test for appV6R3 and plotting it alongside the results for appV6R2, we have the following results for Get Reports and Bulk Upsert.</p>
<h4>Get Reports rate</h4>
<p>We achieved a significant improvement by transitioning from appV6R2 to appV6R3. For the first time, the application successfully reached all the desired rates in a single phase.</p>
<center><caption><b>Figure 13.</b> Graph showing the rates of appV6R2 and appV6R3 when executing the load test for Get Reports functionality. appV6R3 has better rates than appV6R2, but without reaching the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 1.49.51 PM-1hzjkoxux9.png" alt="Graph showing the rates of appV6R2 and appV6R3 when executing the load test for Get Reports functionality. appV6R3 has better rates than appV6R2, but without reaching the desired rates." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Get Reports latency</h4>
<p>The latency saw significant improvements, with the peak value reduced by 71% in the first phase, 67% in the second phase, 47% in the third phase, and 30% in the fourth phase.</p>
<center><caption><b>Figure 14.</b> Graph showing the latency of appV6R2 and appV6R3 when executing the load test for Get Reports functionality. appV6R3 has lower latency than appV6R2.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 1.51.45 PM-3dsn5d9if4.png" alt="Graph showing the latency of appV6R2 and appV6R3 when executing the load test for Get Reports functionality. appV6R3 has lower latency than appV6R2." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert rate</h4>
<p>As had happened in the previous version, the application was able to reach all the desired rates.</p>
<center><caption><b>Figure 15.</b> Graph showing the rates of appV6R2 and appV6R3 when executing the load test for Bulk Upsert functionality. appV6R3 has better rates than appV6R2, and reaches the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 1.55.47 PM-ey7yp55sw1.png" alt="Graph showing the rates of appV6R2 and appV6R3 when executing the load test for Bulk Upsert functionality. appV6R3 has better rates than appV6R2, and reaches the desired rates. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert latency</h4>
<p>Here, we have one of the most significant gains in this series: The latency has decreased from seconds to milliseconds. We went from a peak of 1.8 seconds to 250ms in the first phase, from 2.3 seconds to 400ms in the second phase, from 2 seconds to 600ms in the third phase, and from 2.2 seconds to 800ms in the fourth phase.</p>
<center><caption><b>Figure 16.</b> Graph showing the latency of appV6R2 and appV6R3 when executing the load test for Bulk Upsert functionality. appV6R3 has lower latency than appV6R2.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 1.58.48 PM-441rmwiewd.png" alt="Graph showing the latency of appV6R2 and appV6R3 when executing the load test for Bulk Upsert functionality. appV6R3 has lower latency than appV6R2. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h3>Issues and improvements</h3>
<p>The main bottleneck in our MongoDB server is still disk throughput. As mentioned in the previous Issues and Improvements section, that change was the last improvement available at the application level. How can we further optimize on our current hardware?</p>
<p>If we take a closer look at the <a href="https://www.mongodb.com/docs/manual/core/wiredtiger/#compression">MongoDB documentation</a>, we'll find out that by default, it uses block compression with the snappy compression library for all collections. Before the data is written to disk, it'll be compressed using the snappy library to reduce its size and speed up the writing process.</p>
<p>Would it be possible to use a different and more effective compression library to reduce the size of the data even further and, as a consequence, reduce the load on the server's disk? Yes, and in the following application revision, we will use the zstd compression library instead of the default snappy compression library.</p>
<h2>Application version 6 revision 4 (appV6R4)</h2>
<p>As discussed in the previous Issues and Improvements section, the performance gains of this version come from changing the algorithm of the <a href="https://www.mongodb.com/docs/manual/reference/configuration-options/#mongodb-setting-storage.wiredTiger.collectionConfig.blockCompressor">collection block compressor</a>. By default, MongoDB uses <a href="https://www.mongodb.com/docs/manual/reference/glossary/#std-term-snappy">snappy</a>, which we will change to zstd to achieve better compression at the expense of more CPU usage.</p>
<p>All the schemas, functions, and code from this version are exactly the same as the appV6R3.</p>
<p>To create a collection that uses the zstd compression algorithm, the following command can be used.</p>
<pre><code tabindex="0">db.createCollection(&quot;&lt;collection-name&gt;&quot;, {&NewLine;  storageEngine: { wiredTiger: { configString: &quot;block_compressor=zstd&quot; } },&NewLine;});&NewLine;</code></pre>
<h3>Schema</h3>
<p>The application implementation presented above uses the following TypeScript document schema, named SchemaV6R0:</p>
<pre><code tabindex="0">export type SchemaV6R0 = {&NewLine;  _id: Buffer;&NewLine;  items: Record&lt;&NewLine;    string,&NewLine;    {&NewLine;      a?: number;&NewLine;      n?: number;&NewLine;      p?: number;&NewLine;      r?: number;&NewLine;    }&NewLine;  &gt;;&NewLine;};&NewLine;</code></pre>
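To make the dynamic schema concrete, here is an illustrative document under SchemaV6R0. The key bytes, dates, and counter values are invented for the example, not taken from the post:

```javascript
// Hypothetical document under SchemaV6R0: one bucket per key + quarter,
// with one dynamically named field per day that received events.
const doc = {
  // Packed _id (illustrative): 4 key bytes + year byte (0x19 = 25) + quarter byte.
  _id: Buffer.from("0102abcd1901", "hex"),
  items: {
    "20250115": { a: 2, r: 1 }, // Jan 15: 2 approved, 1 rejected
    "20250207": { n: 1 }, // Feb 7: 1 with no funds
  },
};

// Each status counter is optional, so a day only stores the counters it used.
console.log(Object.keys(doc.items).length); // 2
```

Days with no events simply never appear as fields, which is what keeps the per-event storage cost so low.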
<h3>Bulk upsert</h3>
<p>Based on the specifications, the following bulk updateOne operation is used for each event generated by the application:</p>
<pre><code tabindex="0">const YYYYMMDD = getYYYYMMDD(event.date); // Extract the year(YYYY), month(MM), and day(DD) from the `event.date`&NewLine;&NewLine;const operation = {&NewLine;  updateOne: {&NewLine;    filter: { _id: buildId(event.key, event.date) }, // key + year + quarter&NewLine;    update: {&NewLine;      $inc: {&NewLine;        [`items.${YYYYMMDD}.a`]: event.approved,&NewLine;        [`items.${YYYYMMDD}.n`]: event.noFunds,&NewLine;        [`items.${YYYYMMDD}.p`]: event.pending,&NewLine;        [`items.${YYYYMMDD}.r`]: event.rejected,&NewLine;      },&NewLine;    },&NewLine;    upsert: true,&NewLine;  },&NewLine;};&NewLine;</code></pre>
<p>This updateOne is exactly the same logic as the one for appV6R3.</p>
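The helpers getYYYYMMDD and buildId are referenced but not shown in the post. A minimal sketch, assuming the key is a hex string and the _id packs the key bytes followed by a two-digit year and the quarter, might look like this:

```javascript
// Sketch of the helpers used by the bulk upsert (assumed shapes, not the
// post's exact implementation).
function getYYYYMMDD(date) {
  const yyyy = date.getUTCFullYear().toString();
  const mm = String(date.getUTCMonth() + 1).padStart(2, "0");
  const dd = String(date.getUTCDate()).padStart(2, "0");
  return yyyy + mm + dd; // e.g. "20250131"
}

function buildId(key, date) {
  const quarter = Math.floor(date.getUTCMonth() / 3) + 1; // 1..4
  // Pack the key bytes, then a two-digit year, then the quarter.
  return Buffer.concat([
    Buffer.from(key, "hex"),
    Buffer.from([date.getUTCFullYear() - 2000, quarter]),
  ]);
}

console.log(getYYYYMMDD(new Date("2025-01-31T00:00:00Z"))); // 20250131
```

Keeping the year and quarter at the end of the _id is what makes all buckets of one key sort contiguously.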
<h3>Get reports</h3>
<p>Based on the information presented in the Introduction, we have the following aggregation pipeline to generate the reports document:</p>
<pre><code tabindex="0">const pipeline = [&NewLine;  { $match: docsFromKeyBetweenDate },&NewLine;  { $addFields: buildTotalsField },&NewLine;  { $group: groupCountTotals },&NewLine;  { $project: format },&NewLine;];&NewLine;</code></pre>
<p>This pipeline is exactly the same logic as the one for appV6R3.</p>
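The contents of the $match stage are not shown in the post. As a hedged sketch, because the _id packs key + year + quarter, all buckets of a key inside a date range form one contiguous _id range, so the filter can be a single range condition (buildId below is an assumed helper, not the post's exact code):

```javascript
// Assumed packed-_id helper: key bytes + two-digit year + quarter.
function buildId(key, date) {
  const quarter = Math.floor(date.getUTCMonth() / 3) + 1;
  return Buffer.concat([
    Buffer.from(key, "hex"),
    Buffer.from([date.getUTCFullYear() - 2000, quarter]),
  ]);
}

// Hedged sketch of the docsFromKeyBetweenDate filter: one index range scan
// covers every quarter bucket of the key between the two dates.
function docsFromKeyBetweenDateFilter(key, startDate, endDate) {
  return {
    _id: {
      $gte: buildId(key, startDate),
      $lte: buildId(key, endDate),
    },
  };
}

const filter = docsFromKeyBetweenDateFilter(
  "abcd",
  new Date("2024-01-01T00:00:00Z"),
  new Date("2025-06-30T00:00:00Z")
);
console.log(filter._id.$gte.length); // 4
```

The later stages then only have to trim the per-day fields at the edges of the range, rather than scanning extra documents.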
<h3>Indexes</h3>
<p>No additional indexes are required, maintaining the single _id index approach established in the appV4 implementation.</p>
<h3>Initial scenario statistics</h3>
<h4>Collection statistics</h4>
<p>To evaluate the performance of appV6R4, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Documents</th>
					<th>Data Size</th>
					<th>Document Size</th>
					<th>Storage Size</th>
					<th>Indexes</th>
					<th>Index size</th>
        </tr>
        <tr>
					<td>appV6R3</td>
					<td>33,429,694</td>
					<td>9.53GB</td>
					<td>307B</td>
					<td>2.56GB</td>
					<td>1</td>
					<td>1.19GB</td>
        </tr>
			  <tr>
					<td>appV6R4</td>
					<td>33,429,372</td>
					<td>9.53GB</td>
					<td>307B</td>
					<td>1.47GB</td>
					<td>1</td>
					<td>1.34GB</td>
        </tr>
    </table>
</body>
</html>
<h4>Event statistics</h4>
<p>To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total storage size and index size by the 500 million events.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
            <th>Collection</th>
            <th>Storage Size/Events</th>
					<th>Index Size/Events</th>
					<th>Total Storage Size/Events</th>
        </tr>
        <tr>
					<td>appV6R3</td>
					<td>5.5B</td>
					<td>2.6B</td>
					<td>8.1B</td>
        </tr>
			  <tr>
					<td>appV6R4</td>
					<td>3.2B</td>
					<td>2.8B</td>
					<td>6.0B</td>
        </tr>
    </table>
</body>
</html>
<p>Since the application implementation of appV6R4 is the same as appV6R3, the values for Data Size, Document Size, and Index Size remain the same. The difference lies in Storage Size, which represents the Data Size after compression. Going from snappy to zstd decreased the Storage Size by a jaw-dropping 43%. Looking at the Event Statistics, there was a 26% reduction in the storage required to register each event, going from 8.1 bytes to 6 bytes. These considerable reductions in size will probably translate to better performance in this version, as our main bottleneck is disk throughput.</p>
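These percentages can be sanity-checked with a few lines of arithmetic, taking the sizes straight from the tables above:

```javascript
// Verify the reported reductions from the collection and event statistics.
const reduction = (before, after) => 1 - after / before;

// Storage Size: 2.56 GB (snappy, appV6R3) -> 1.47 GB (zstd, appV6R4)
console.log(Math.round(reduction(2.56, 1.47) * 100)); // 43

// Total storage per event: 8.1 B (appV6R3) -> 6.0 B (appV6R4)
console.log(Math.round(reduction(8.1, 6.0) * 100)); // 26
```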
<h3>Load test results</h3>
<p>Executing the load test for appV6R4 and plotting it alongside the results for appV6R3, we have the following results for Get Reports and Bulk Upsert.</p>
<h4>Get Reports rate</h4>
<p>Although we didn't achieve all the desired rates, we saw a significant improvement from appV6R3 to appV6R4. This revision allowed us to reach the desired rates in the first, second, and third phases.</p>
<center><caption><b>Figure 17.</b> Graph showing the rates of appV6R3 and appV6R4 when executing the load test for Get Reports functionality. appV6R4 has better rates than appV6R3, but without reaching the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 2.14.11 PM-tg3ngcevab.png" alt="Graph showing the rates of appV6R3 and appV6R4 when executing the load test for Get Reports functionality. appV6R4 has better rates than appV6R3, but without reaching the desired rates." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Get Reports latency</h4>
<p>The latency also saw significant improvements, with the peak value reduced by 30% in the first phase, 57% in the second phase, 61% in the third phase, and 57% in the fourth phase.</p>
<center><caption><b>Figure 18.</b> Graph showing the latency of appV6R3 and appV6R4 when executing the load test for Get Reports functionality. appV6R4 has lower latency than appV6R3.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 2.16.10 PM-dg30y19axi.png" alt="Graph showing the latency of appV6R3 and appV6R4 when executing the load test for Get Reports functionality. appV6R4 has lower latency than appV6R3." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert rate</h4>
<p>As had happened in the previous version, the application was able to reach all the desired rates.</p>
<center><caption><b>Figure 19.</b> Graph showing the rates of appV6R3 and appV6R4 when executing the load test for Bulk Upsert functionality. Both versions reach the desired rates.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 2.17.35 PM-h4nltf0qv3.png" alt="Graph showing the rates of appV6R3 and appV6R4 when executing the load test for Bulk Upsert functionality. Both versions reach the desired rates." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h4>Bulk Upsert latency</h4>
<p>Here, we also achieved considerable improvements, with the peak value being reduced by 48% in the first phase, 39% in the second phase, 43% in the third phase, and 47% in the fourth phase.</p>
<center><caption><b>Figure 20.</b> Graph showing the latency of appV6R3 and appV6R4 when executing the load test for Bulk Upsert functionality. appV6R4 has lower latency than appV6R3.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-10-08 at 2.19.25 PM-bscvof4mm7.png" alt="Graph showing the latency of appV6R3 and appV6R4 when executing the load test for Bulk Upsert functionality. appV6R4 has lower latency than appV6R3. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h3>Issues and improvements</h3>
<p>Although this is the final version of the series, there is still room for improvement. For those willing to experiment on their own, here are a few ideas:</p>
<ul>
<li>
<p>Use the Computed Pattern in the appV6R4.</p>
</li>
<li>
<p>Optimize the aggregation pipeline logic for Get Reports in the appV6R4.</p>
</li>
<li>
<p>Change the <a href="https://www.mongodb.com/docs/manual/reference/configuration-options/#mongodb-setting-storage.wiredTiger.engineConfig.zstdCompressionLevel">zstd compression level</a> from its default value of 6 to a higher value.</p>
</li>
</ul>
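On the last idea: the zstd compression level is a server-level setting rather than a per-collection option. A hedged sketch of a mongod configuration file raising it above the default of 6 (12 here is an arbitrary illustrative value; higher levels trade CPU for smaller storage):

```yaml
# mongod configuration file (YAML) - raise the zstd block compression level.
storage:
  wiredTiger:
    engineConfig:
      zstdCompressionLevel: 12
```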
<h2>Conclusion</h2>
<p>This final part of &quot;The Cost of Not Knowing MongoDB&quot; series has explored the ultimate evolution of MongoDB application optimization, demonstrating how revolutionary design patterns and infrastructure-level improvements can transcend traditional performance boundaries. The journey through appV6R0 to appV6R4 represents the culmination of sophisticated MongoDB development practices, achieving performance levels that seemed impossible with the baseline appV1 implementation.</p>
<h3>Series transformation summary</h3>
<p><b>From foundation to revolution: </b>The complete series showcases a remarkable transformation across three distinct optimization phases.</p>
<ul>
<li>
<p><b>Part 1</b> (appV1-appV4): Document-level optimizations achieving 51% storage reduction through schema refinement, data type optimization, and strategic indexing.</p>
</li>
<li>
<p><b>Part 2</b> (appV5R0-appV5R4): Advanced pattern implementation with the Bucket and Computed Patterns, delivering 89% index size reduction and first-time achievement of target rates.</p>
</li>
<li>
<p><b>Part 3</b> (appV6R0-appV6R4): Revolutionary Dynamic Schema Pattern with infrastructure optimization, culminating in sub-second latencies and comprehensive target rate achievement.</p>
</li>
</ul>
<p><b>Performance evolution:</b> The progression reveals exponential improvements across all metrics.</p>
<ul>
<li>
<p><b>Get Reports latency:</b> From 6.5 seconds (appV1) to 200-800ms (appV6R4)—a 92% improvement.</p>
</li>
<li>
<p><b>Bulk Upsert latency:</b> From 62 seconds (appV1) to 250-800ms (appV6R4)—a 99% improvement.</p>
</li>
<li>
<p><b>Storage efficiency:</b> From 128.1B per event (appV1) to 6.0B per event (appV6R4)—a 95% reduction.</p>
</li>
<li>
<p><b>Target rate achievement:</b> From consistent failures to sustained success across all operational phases.</p>
</li>
</ul>
<h3>Architectural paradigm shifts</h3>
<p><b>The Dynamic Schema Pattern revolution:</b> appV6R0 through appV6R4 introduced the most sophisticated MongoDB design pattern explored in this series. The Dynamic Schema Pattern fundamentally redefined data organization by:</p>
<ul>
<li>
<p><b>Eliminating array overhead:</b> Replacing MongoDB arrays with computed object structures to minimize storage and processing costs.</p>
</li>
<li>
<p><b>Single-pipeline optimization:</b> Consolidating five separate aggregation pipelines into one optimized operation, reducing computational overhead by 80%.</p>
</li>
<li>
<p><b>Infrastructure-level optimization:</b> Implementing zstd compression, achieving 43% additional storage reduction over default snappy compression.</p>
</li>
</ul>
<p><b>Query optimization breakthroughs:</b> The implementation of intelligent date range calculation within aggregation pipelines eliminated redundant operations while maintaining data accuracy. This approach demonstrates senior-level MongoDB development by leveraging advanced aggregation framework capabilities to achieve both performance and maintainability.</p>
<h3>Critical technical insights</h3>
<p><b>Performance bottleneck evolution:</b> Throughout the series, we observed how the optimization focus shifted as bottlenecks were resolved:</p>
<ol>
<li>
<p><b>Initial phase:</b> Index size and query inefficiency dominated performance.</p>
</li>
<li>
<p><b>Intermediate phase:</b> Document retrieval count became the limiting factor.</p>
</li>
<li>
<p><b>Advanced phase:</b> Aggregation pipeline complexity constrained throughput.</p>
</li>
<li>
<p><b>Final phase:</b> Disk I/O emerged as the ultimate hardware limitation.</p>
</li>
</ol>
<p><b>Pattern application maturity:</b> The series demonstrates the progression from junior to senior MongoDB development practices:</p>
<ul>
<li>
<p><b>Junior level:</b> Schema design without understanding indexing implications (appV1)</p>
</li>
<li>
<p><b>Intermediate level:</b> Applying individual optimization techniques (appV2-appV4)</p>
</li>
<li>
<p><b>Advanced level:</b> Implementing established MongoDB patterns (appV5RX)</p>
</li>
<li>
<p><b>Senior level:</b> Creating custom patterns and infrastructure optimization (appV6RX)</p>
</li>
</ul>
<h3>Production implementation guidelines</h3>
<p><b>When to apply each pattern:</b> Based on the comprehensive analysis, the following guidelines emerge for production implementations:</p>
<ul>
<li>
<p><b>Document-level optimizations:</b> Essential for all MongoDB applications, providing 40-60% improvement with minimal complexity</p>
</li>
<li>
<p><b>Bucket Pattern:</b> Optimal for time-series data with 10:1 or greater read-to-write ratios</p>
</li>
<li>
<p><b>Computed Pattern:</b> Most effective in read-heavy scenarios with predictable aggregation requirements</p>
</li>
<li>
<p><b>Dynamic Schema Pattern:</b> Reserved for high-performance applications where development complexity trade-offs are justified</p>
</li>
</ul>
<p><b>Infrastructure considerations:</b> The zstd compression implementation in appV6R4 demonstrates that infrastructure-level optimizations can provide substantial benefits (40%+ storage reduction) with minimal application changes. However, these optimizations require careful CPU utilization monitoring and may not be suitable for CPU-constrained environments.</p>
<h3>The true cost of not knowing MongoDB</h3>
<p>This series reveals that the &quot;cost&quot; extends far beyond mere performance degradation:</p>
<p><b>Quantifiable impacts:</b></p>
<ul>
<li>
<p><b>Resource utilization:</b> Up to 20x more storage requirements for equivalent functionality</p>
</li>
<li>
<p><b>Infrastructure costs:</b> Potentially 10x higher hardware requirements due to inefficient patterns</p>
</li>
<li>
<p><b>Developer productivity:</b> Months of optimization work that could be avoided with proper initial design</p>
</li>
<li>
<p><b>Scalability limitations:</b> Fundamental architectural constraints that become exponentially expensive to resolve</p>
</li>
</ul>
<p><b>Hidden complexities:</b> More critically, the series demonstrates that MongoDB's apparent simplicity can mask sophisticated optimization requirements. The transition from appV1 to appV6R4 required a deep understanding of:</p>
<ul>
<li>
<p>Aggregation framework internals and optimization strategies.</p>
</li>
<li>
<p>Index behavior with different data types and query patterns.</p>
</li>
<li>
<p>Storage engine compression algorithms and trade-offs.</p>
</li>
<li>
<p>Memory management and cache utilization patterns.</p>
</li>
</ul>
<h3>Final recommendations</h3>
<p><b>For development teams:</b></p>
<ol>
<li>
<p><b>Invest in MongoDB education:</b> The performance differences documented in this series justify substantial training investments.</p>
</li>
<li>
<p><b>Establish pattern libraries:</b> Codify successful patterns like those demonstrated to prevent anti-pattern adoption.</p>
</li>
<li>
<p><b>Implement performance testing:</b> Regular load testing reveals optimization opportunities before they become production issues.</p>
</li>
<li>
<p><b>Plan for iteration:</b> Schema evolution is inevitable; design systems that accommodate architectural improvements.</p>
</li>
</ol>
<p><b>For architectural decisions:</b></p>
<ol>
<li>
<p><b>Start with fundamentals:</b> Proper indexing and schema design provide the foundation for all subsequent optimizations.</p>
</li>
<li>
<p><b>Measure before optimizing:</b> Each optimization phase in this series was guided by comprehensive performance measurement.</p>
</li>
<li>
<p><b>Consider total cost of ownership:</b> The development complexity of advanced patterns must be weighed against performance requirements.</p>
</li>
<li>
<p><b>Plan infrastructure scaling:</b> Understand that hardware limitations will eventually constrain software optimizations.</p>
</li>
</ol>
<h3>Closing reflection</h3>
<p>The journey from appV1 to appV6R4 demonstrates that MongoDB mastery requires understanding not just the database itself, but the intricate relationships between schema design, query patterns, indexing strategies, aggregation frameworks, and infrastructure capabilities. The 99% performance improvements documented in this series are achievable, but they demand dedication to continuous learning and sophisticated engineering practices.</p>
<p>For organizations serious about MongoDB performance, this series provides both a roadmap for optimization and a compelling case for investing in advanced MongoDB expertise. The cost of not knowing MongoDB extends far beyond individual applications—it impacts entire technology strategies and competitive positioning in data-driven markets.</p>
<p>The patterns, techniques, and insights presented throughout this three-part series offer a comprehensive foundation for building high-performance MongoDB applications that can scale efficiently while maintaining operational excellence. Most importantly, they demonstrate that with proper knowledge and application, MongoDB can deliver extraordinary performance that justifies its position as a leading database technology for modern applications.</p>
<div class="callout">
<p><b>Learn more about <a href="https://www.mongodb.com/company/blog/building-with-patterns-a-summary">MongoDB design patterns</a>!</b></p>
<p><b>Check out more posts from <a href="https://www.mongodb.com/developer/author/artur-costa/">Artur Costa</a>.</b></p>
</div>	]]></description>
      <pubDate>Thu, 09 Oct 2025 15:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/technical/cost-of-not-knowing-mongodb-part-3-appv6r0-appv6r4</link>
      <guid>https://www.mongodb.com/company/blog/technical/cost-of-not-knowing-mongodb-part-3-appv6r0-appv6r4</guid>
    </item><item>
      <title>Innovating with MongoDB | Customer Successes, October 2025</title>
      <description><![CDATA[<p>It’s officially fall! The start of every new season is a perfect time to consider change and new beginnings. While fall might make you think about pumpkin spice and newly chilly evenings, I’m thinking about the latest round of transformations that MongoDB’s customers are embracing to thrive in an AI-powered world.</p>
<p>In all seriousness, legacy systems and technical debt are huge challenges: the cost of tech debt has been estimated at almost $4 trillion. That’s trillion with a T! Legacy systems can slow down innovation, create bottlenecks, and make it tough to deliver the seamless, real-time experiences customers increasingly expect. But companies are finding that modernizing their applications isn't just about fixing what's broken—modernization enables them to move faster and innovate for end-users.</p>
<p>That’s why I'm incredibly excited to share the recent launch of MongoDB’s <a href="https://www.mongodb.com/solutions/use-cases/modernize">Application Modernization Platform</a> (AMP). This AI-powered program is designed to help enterprises move beyond outdated infrastructures to embrace a flexible, data-driven future. AMP is a comprehensive approach to modernization that combines smart AI tooling with proven methodologies, enabling businesses to transform their applications from the ground up, moving from legacy monoliths to a more flexible, microservices-based architecture.</p>
<p>In this roundup, we're spotlighting customers who understand the strategic importance of modernization. You'll see how Wells Fargo is using MongoDB to power a new credit card platform, how CSX is ensuring business continuity during a critical migration, how Intellect Design is modernizing its wealth management platform, and how Deutsche Telekom is transforming its B2C digital channels. With MongoDB, customers are showing how integral a <a href="https://www.mongodb.com/resources/solutions/use-cases/innovate-and-modernize">modern database</a> is to powering the next generation of applications—and succeeding in the AI era.</p>
<h2>Wells Fargo</h2>
<p><a href="https://www.mongodb.com/solutions/customer-case-studies/wells-fargo?tck=customer_blog_october_25">Wells Fargo</a> sought to modernize its mainframe-dependent credit card platform to provide a faster, more seamless customer experience and handle an exponential increase in transaction data. The company's legacy system was costly to manage and lacked the scalability needed for its &quot;Cards 2.0&quot; initiative.</p>
<p>To solve this, Wells Fargo built an operational data store (ODS) using MongoDB. This new platform allowed them to adopt reusable APIs, streamline integrations, and move from a monolithic architecture to flexible microservices. The ODS now serves 40% of traffic from external vendors, handling more than 7 million transactions with sub-second service.</p>
<p>By leveraging MongoDB, Wells Fargo was able to jumpstart its <a href="https://www.mongodb.com/solutions/use-cases/mainframe-modernization">mainframe modernization</a> and create curated data products to serve real-time, personalized financial services.</p>
<h2>CSX</h2>
<p><a href="https://www.mongodb.com/solutions/customer-case-studies/csx?tck=customer_blog_october_25">CSX</a>, a major U.S. railroad company, sought to modernize its critical operations platform, RTOP, by migrating it to the cloud. The challenge was to maintain the platform's 24/7 availability with minimal disruption to its mission-critical, near real-time operations during the transition.</p>
<p>To solve this, CSX selected <a href="https://www.mongodb.com/products/platform/atlas-cloud-providers/azure">MongoDB Atlas on Azure</a> and partnered with <a href="https://www.mongodb.com/services/consulting">MongoDB Professional Services</a>. Leveraging the <a href="https://www.mongodb.com/products/tools/cluster-to-cluster-sync">Cluster-to-Cluster Sync (mongosync)</a> feature, the team was able to facilitate continuous data synchronization and complete the entire migration in just a few hours.</p>
<p>The move to MongoDB Atlas has equipped CSX with a more scalable and resilient platform. This modernization effort established a blueprint for migrating other critical applications and helped CSX continue its digital transformation journey toward becoming America’s best-run railroad.</p>
<h2>Intellect Design</h2>
<p><a href="https://www.mongodb.com/company/blog/innovation/intellect-design-accelerates-legacy-modernization-by-200-percent-mongodb-gen-ai?tck=customer_blog_october_25">Intellect Design</a>, a global fintech company, sought to modernize its wealth management platform to overcome legacy system bottlenecks and multihour batch processing delays. The company's rigid relational database architecture limited its ability to scale and innovate.</p>
<p>To solve this, the company partnered with MongoDB, using our <a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">AMP methodology</a> and generative AI tools. This transformation reengineered the platform's core components, resulting in an 85% reduction in onboarding workflow times, allowing clients to access critical portfolio insights faster than ever.</p>
<p>This initiative is the first step in Intellect Design's long-term vision to integrate its entire application suite into a unified, AI-driven service. By leveraging MongoDB Atlas's flexible schema and powerful native tools, the company is now better positioned to deliver smarter analytics and advanced AI capabilities to its customers.</p>
<div class="callout">
<p><b>Watch Intellect AI’s MongoDB.local Bengaluru <a href="https://youtu.be/FESKCQuQZVA?si=-7zndLmpxb__iknV&t=1316" target="_blank">keynote presentation</a> to learn how AMP helped them transform outdated systems into scalable, modern solutions. </b></p>
</div>	
<h2>Deutsche Telekom</h2>
<p><a href="https://www.mongodb.com/solutions/customer-case-studies/dt?tck=customer_blog_october_25">Deutsche Telekom</a>, a leading telecommunications company, sought to modernize its B2C digital channels, which were fragmented by outdated legacy systems. The company needed to create a unified digital experience for its 30 million customers while improving developer productivity.</p>
<p>By leveraging <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> as part of its Internal Developer Platform, Deutsche Telekom built a robust data infrastructure to unify customer data and power its new digital services. This approach allowed the company to retire legacy systems and reduce its reliance on physical shops and call centers.</p>
<p>The transition to MongoDB Atlas led to a massive surge in digital engagement, with daily customer interactions rising from under 50,000 to approximately 1.5 million. The company's customer data platform now handles up to 15 times the load of legacy systems, supporting large-scale loyalty programs and transforming the customer experience.</p>
<h2>Video spotlight: Bendigo Bank</h2>
<p>Before you go, watch how Bendigo and Adelaide Bank modernized their core banking technology using MongoDB Atlas and generative AI.</p>
<iframe width="800" height="425" src="https://www.youtube.com/embed/xv9vIftwvXA?si=gfOHY_GX5IIwViDK" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
<center><figcaption class="fl-center">Bendigo and Adelaide Bank reduced the migration time for legacy applications from 80 hours to just five minutes. This innovative approach allowed them to quickly modernize their systems and better serve their 2.5 million customers. </figcaption></center>
<div class="callout">
<p><b>Want to get inspired by your peers and discover all the ways we empower businesses to innovate for the future? Visit <a href="https://www.mongodb.com/solutions/customer-case-studies">MongoDB’s Customer Success Stories hub</a> to see why these customers, and so many more, build modern applications with MongoDB.</b></p>
</div>	]]></description>
      <pubDate>Thu, 02 Oct 2025 18:52:47 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/innovation/innovating-customer-successes-october-2025</link>
      <guid>https://www.mongodb.com/company/blog/innovation/innovating-customer-successes-october-2025</guid>
    </item><item>
      <title>The 10 Skills I Was Missing as a MongoDB User</title>
      <description><![CDATA[<p>When I first started using MongoDB, I didn’t have a plan beyond “install it and hope for the best.” I had read about how flexible it was, and it felt like all the developers swore by it, so I figured I’d give it a shot. I spun it up, built my first application, and got a feature working.</p>
<p>But I felt like something was missing.</p>
<p>It felt clunky. My queries were longer than I expected, and performance wasn’t great; I had the sense that I was fighting with the database instead of working with it. After a few projects like that, I began to wonder if maybe MongoDB wasn’t for me.</p>
<p>Looking back now, I can say the problem wasn’t MongoDB; it was somewhere between the keyboard and the chair. It was me. I was carrying over habits from years of working with relational databases, expecting the same rules to apply.</p>
<p>If <a href="https://learn.mongodb.com/skills">MongoDB’s Skill Badges</a> had existed when I started, I think my learning curve would have been a lot shorter. I had to learn many lessons the hard way, but these new badges cover the skills I had to piece together slowly. Instead of pretending I nailed it from day one, here’s the honest version of how I learned MongoDB, what tripped me up along the way, and how these Skill Badges would have helped.</p>
<h2>Learning to model the MongoDB way</h2>
<p>The first thing I got wrong was data modeling. I built my schema like I was still working in SQL: every entity in its own collection, always referencing instead of embedding, and absolutely no data duplication. It felt safe because it was familiar.</p>
<p>Then I hit my first complex query. It required data from various collections, and suddenly, I found myself writing a series of queries and stitching them together in my code. It worked, but it was a messy process.</p>
<p>When I discovered embedding, it felt like I had found a cheat code. I could put related data together in one single document, query it in one shot, and get better performance.</p>
<p>That’s when I made my second mistake. I started embedding everything.</p>
<p>At first, it seemed fine. However, my documents grew huge, updates became slower, and I was duplicating data in ways that created consistency issues. That’s when I learned about patterns like Extended References, and more generally, how to choose between embedding and referencing based on access patterns and update frequency.</p>
<p>Later, I ran into more specialized needs, such as pre-computing data, embedding a subset of a large dataset into a parent, and tackling schema versioning. Back then, I learned those patterns by trial and error. Now, they’re covered in badges like <a href="https://learn.mongodb.com/courses/relational-to-document-model">Relational to Document Model</a>, <a href="https://learn.mongodb.com/courses/schema-design-patterns-and-antipatterns">Schema Design Patterns</a>, and <a href="https://learn.mongodb.com/courses/advanced-schema-patterns-and-antipatterns">Advanced Schema Patterns</a>.</p>
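<p>To make the embedding-versus-referencing trade-off concrete, here is a minimal sketch of the Extended Reference pattern, with entirely hypothetical field and collection names: the order duplicates only the customer fields its read paths actually need, keeping the <code>_id</code> as a reference for everything else.</p>

```javascript
// Hypothetical e-commerce documents illustrating the Extended Reference pattern:
// instead of embedding the whole customer (or storing only an _id), the order
// duplicates just the customer fields its read paths actually need.

const customer = {
  _id: "cust123",
  name: "Ada Lopez",
  email: "ada@example.com",
  shippingAddress: { street: "1 Main St", city: "Lima" },
  loyaltyTier: "gold",
  // ...plus many more fields the order screen never shows
};

// Extended reference: a curated subset, not the full document.
const order = {
  _id: "order789",
  customer: {
    _id: customer._id,   // keep the reference for occasional joins/updates
    name: customer.name, // duplicated because every order view shows it
    city: customer.shippingAddress.city,
  },
  items: [{ sku: "A1", qty: 2, price: 9.99 }],
  total: 19.98,
};

// Rendering an order now needs no second query or $lookup.
console.log(`${order.customer.name} (${order.customer.city}): $${order.total}`);
```

<p>The duplicated fields only create a consistency burden when they change, which is why the pattern favors fields that are read often and updated rarely.</p>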
<h2>Fixing what I thought was “just a slow query”</h2>
<p>Even after I got better at modeling, performance issues kept popping up. One collection in particular started slowing down as it grew, and I thought, “I know what to do! I’ll just add some indexes.”</p>
<p>I added them everywhere I thought they might help. Nothing improved.</p>
<p>It turns out indexes only help if they match your query patterns. The order of fields matters, and whether an index covers your query shapes affects performance. Most importantly, just because you can add an index doesn’t mean you should.</p>
<p>The big shift for me was learning to read an <code tabindex="0">explain()</code> plan and see how MongoDB was actually executing my queries. Once I started matching my indexes to my queries, performance went from “ok” to “blazing fast.”</p>
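<p>The point about field order can be illustrated with a toy check (this is not MongoDB’s actual query planner, just the intuition): an equality query can use a compound index only when the fields it filters on form a left-to-right prefix of the index’s key order.</p>

```javascript
// Toy illustration (not MongoDB's planner): an equality query can use a
// compound index only when its fields cover a prefix of the index's key order.
function canUseIndex(indexFields, queryFields) {
  const wanted = new Set(queryFields);
  let matched = 0;
  for (const field of indexFields) {
    if (wanted.has(field)) matched++;
    else break; // the first gap ends the usable prefix
  }
  return matched === wanted.size;
}

const index = ["status", "customerId", "orderDate"];

console.log(canUseIndex(index, ["status"]));               // true: prefix
console.log(canUseIndex(index, ["status", "customerId"])); // true: prefix
console.log(canUseIndex(index, ["customerId"]));           // false: skips "status"
```

<p>This is exactly the kind of mismatch an <code>explain()</code> plan exposes: the index on <code>{ status: 1, customerId: 1, orderDate: 1 }</code> does nothing for a query that filters only on <code>customerId</code>.</p>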
<p>Around the same time, I stopped doing all my data transformation in application code. Before, I’d pull in raw data and loop through it to filter, group, and calculate. It was slow, verbose, and easy to break. Learning the aggregation framework completely changed that. I could handle the filtering and grouping right in the database, which made my code cleaner and the queries faster.</p>
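<p>As a rough before-and-after sketch (collection and field names are made up; <code>$match</code>, <code>$group</code>, and <code>$sum</code> are standard aggregation operators), the same filter-and-group logic can move from application code into a pipeline:</p>

```javascript
// The app-side version I used to write: pull everything, then loop.
const orders = [
  { status: "shipped", total: 20 },
  { status: "shipped", total: 30 },
  { status: "pending", total: 15 },
];

const shippedRevenue = orders
  .filter((o) => o.status === "shipped")
  .reduce((sum, o) => sum + o.total, 0);

console.log(shippedRevenue); // 50

// The equivalent work pushed into the database as an aggregation pipeline:
const pipeline = [
  { $match: { status: "shipped" } },
  { $group: { _id: "$status", revenue: { $sum: "$total" } } },
];
// db.orders.aggregate(pipeline) returns the grouped result server-side,
// so only the summary crosses the network instead of every raw document.
```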
<p>There was a lot of guesswork in how I created my indexes, but the new <a href="https://learn.mongodb.com/courses/indexing-design-fundamentals">Indexing Design Fundamentals</a> badge covers that now. And when it comes to querying and analyzing data, <a href="https://learn.mongodb.com/courses/fundamentals-of-data-transformation">Fundamentals of Data Transformation</a> is there to help you. Had those two badges existed when I first started, I would’ve saved a lot of time lost to trial and error.</p>
<h2>Moving from “it works” to “it works reliably”</h2>
<p>Early on, my approach to monitoring was simple: wait for something to break, then figure out why. If performance went down, I’d poke around in logs. If a server stopped responding, I’d turn it off and on again, and hope for the best.</p>
<p>It was stressful, and it meant I was always reacting instead of preventing problems.</p>
<p>When I learned to use MongoDB’s monitoring tools properly, that changed. I could track latency, replication lag, and memory usage. I set alerts for unusual query patterns. I started seeing small problems before they turned into outages.</p>
<p>Performance troubleshooting became more methodical as well. Instead of guessing, I measured: breaking down queries, checking index use, and looking at server metrics side by side. The fixes were faster and more precise.</p>
<p>Reliability was the last piece I got serious about. I used to think a working cluster was a reliable cluster. But reliability also means knowing what happens if a node fails, how quickly failover kicks in, and whether your recovery plan actually works in practice.</p>
<p>You can now learn those things in the <a href="https://learn.mongodb.com/courses/monitoring-tooling">Monitoring Tooling</a>, <a href="https://learn.mongodb.com/courses/performance-tools-and-techniques">Performance Tools and Techniques</a>, and <a href="https://learn.mongodb.com/courses/cluster-reliability">Cluster Reliability</a> skill badges. If you are looking at deploying and maintaining MongoDB clusters, these badges will teach you what you need to know to make your deployment more resilient.</p>
<h2>Getting curious about what’s next</h2>
<p>Once my clusters were stable, I stopped firefighting, and my mindset changed. When you trust your data model, your indexes, your aggregations, and your operations, you get to relax. You can then spend that time on what’s coming next instead of fixing what’s already in production.</p>
<p>For me, that means exploring features I wouldn’t have touched earlier, like <a href="https://learn.mongodb.com/courses/search-fundamentals">Atlas Search</a>, gen AI, and <a href="https://learn.mongodb.com/courses/vector-search-fundamentals">Vector Search</a>. Now that the fundamentals are solid, I can experiment without risking stability and bring in new capabilities when a project actually calls for them.</p>
<h2>What I’d tell my past self</h2>
<p>If I could go back to when I first installed MongoDB, I’d keep it simple:</p>
<ul>
<li>
<p>Focus on data modeling first. A good foundation will save you from most of the problems I ran into.</p>
</li>
<li>
<p>Once you have that, learn indexing and aggregation pipelines. They will make your life much easier when querying.</p>
</li>
<li>
<p>Start monitoring from day one. It will save you a lot of trouble in the long run.</p>
</li>
<li>
<p>Take a moment to educate yourself. You can only learn so much from trial and error. MongoDB offers a myriad of resources and ways to upskill yourself.</p>
</li>
</ul>
<p>Once you have established that base, you can explore more advanced topics and uncover the full potential of MongoDB. Features like Vector Search, full-text search with Atlas Search, or advanced schema design patterns are much easier to adopt when you trust your data model and have confidence in your operational setup.</p>
<div class="callout">
<p><b>MongoDB Skill Badges cover all of these areas and more. They are short, practical, and focused on solving real problems you will face as a developer or DBA, and most of them can be taken over your lunch break. You can browse the full catalog at <a href="http://learn.mongodb.com/skills">learn.mongodb.com/skills</a> and pick the one that matches the challenge you are facing today. Keep going from there, and you might be surprised how much more you can get out of the database once you have the right skills in place.</b></p>
</div>	]]></description>
      <pubDate>Thu, 02 Oct 2025 15:31:41 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/technical/10-skills-was-missing-as-mongodb-user</link>
      <guid>https://www.mongodb.com/company/blog/technical/10-skills-was-missing-as-mongodb-user</guid>
    </item><item>
      <title>Smarter AI Search, Powered by MongoDB Atlas and Pureinsights</title>
      <description><![CDATA[<p>We’re excited to announce that the integration of <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> with the <a href="https://cloud.mongodb.com/ecosystem/pureinsights">Pureinsights</a> Discovery Platform is now generally available—bringing to life a reimagined search experience powered by keyword, vector, and gen AI.</p>
<p>What if your search box didn’t just find results, but instead understood intent? That’s exactly what this integration delivers!</p>
<h2>Beyond search: From matching to meaning</h2>
<p>Developers rely on MongoDB’s expansive knowledge ecosystem to find answers fast. But even with a rich library of technical blogs, forum threads, and documentation, traditional keyword search often falls short—especially when queries are nuanced, multilingual, or context-driven.</p>
<p>That’s where the MongoDB-Pureinsights solution shines.
Built on MongoDB Atlas and orchestrated by the Pureinsights Discovery platform, this intelligent search experience starts with the fundamentals: fast, accurate keyword results, powered by <a href="https://www.mongodb.com/products/platform/atlas-search">MongoDB Atlas Search</a>.</p>
<p>But as queries grow more ambiguous—say, “tutorials for AI”—the platform steps up. <a href="https://www.mongodb.com/products/platform/atlas-vector-search">MongoDB Atlas Vector Search</a> with <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> (now part of MongoDB), available as an embedding and reranking option, goes beyond literal keywords to interpret intent—helping applications deliver smarter, more relevant results. The outcome: semantically aware responses that feel intuitive and accurate—because they are.</p>
<p>What’s more, with generative answers enabled, the platform synthesizes information across MongoDB’s ecosystem (blog content, forums, and technical docs) to deliver clear, contextual answers using state-of-the-art language models. But it's not just pointing you to the right page. Instead, the platform is providing the right answer, with citations, ready to use. It’s like embedding a domain-trained AI assistant directly into your search bar.</p>
<p>“As organizations look to move beyond traditional keyword search, they need solutions that combine speed, relevance, and contextual understanding,” said Haim Ribbi, Vice President, Global CSI, VAR &amp; Tech Partner at MongoDB. “MongoDB Atlas provides the foundation for smarter discovery, and this collaboration with Pureinsights shows how easily teams can deliver gen AI-powered search experiences using their existing content.”</p>
<h2>Built for users everywhere</h2>
<p>But intelligence alone doesn’t make it transformational. What sets this experience apart is its adaptability. Whether you’re a developer troubleshooting in Berlin or a product owner building in São Paulo, the platform tailors responses to your preferences.</p>
<p>Prefer concise summaries or deep technical dives? Want to translate answers in real time? Need responses that reflect your role and context? You’re in control. From tone and length to language and specificity, this is a search that truly understands you—literally and figuratively.</p>
<h2>Built on MongoDB. Elevated by Voyage AI. Delivered by Pureinsights.</h2>
<p>At the core of this solution is MongoDB Atlas, which unifies fast, scalable data access to structured content through Atlas Search and Atlas Vector Search. Looking ahead, by integrating with Voyage AI’s industry-leading embedding models, MongoDB Atlas aims to make semantic search and <a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/rag/">retrieval-augmented generation</a> (RAG) applications even more accurate and reliable. While currently in private preview, this enhancement signals a promising future for developers building intelligent, AI-powered experiences.</p>
<p><a href="https://pureinsights.com/search-application-consulting/mongodb-consulting-and-implementation-services/" target="_blank">Pureinsights</a> handles the orchestration layer. Their Discovery Platform ingests and enriches content, blends keyword, vector, and generative search into a seamless UI, and integrates with large language models like GPT-4. The platform supports multilingual capabilities, easy deployment, and enterprise-grade scalability out of the box. While generative answers are powered by integrated <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">large language models</a> (LLMs) and may vary by deployment, the solution is enterprise-ready, cloud-native, and built to scale.</p>
<h2>Bringing intelligent discovery to your own data</h2>
<p><a href="https://www.youtube.com/watch?v=GJD9iQaL0JE&utm_source=mongodb-blog&utm_medium=referral&utm_campaign=genai-discovery-demo" target="_blank">Watch the demo video</a> to see AI-powered search in action across 4,000+ pages of MongoDB content—from community forums and blog posts to technical documentation.</p>
<p>While the demo features MongoDB’s content, the solution is built to adapt. You can bring the same AI-powered experience to your internal knowledge base, customer support portal, or developer hub—no need to build from scratch.</p>
<div class="callout">
<p><b>Visit our <a href="https://cloud.mongodb.com/ecosystem/pureinsights">partner page</a> to learn more about MongoDB and Pureinsights and how we’re helping enterprises build smarter, AI-powered search experiences. <a href="https://pureinsights.com/demo-request/?utm_source=mongodb-blog&utm_medium=referral&utm_campaign=genai-discovery-demo" target="_blank">Apply for a free gen AI demo</a> using your enterprise content.</b></p>
</div>	]]></description>
      <pubDate>Wed, 01 Oct 2025 14:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/innovation/smarter-ai-search-powered-by-atlas-pureinsights</link>
      <guid>https://www.mongodb.com/company/blog/innovation/smarter-ai-search-powered-by-atlas-pureinsights</guid>
    </item><item>
      <title>Top Considerations When Choosing a Hybrid Search Solution</title>
      <description><![CDATA[<p>Search has evolved. Today, natural language queries have largely replaced simple keyword searches when addressing our information needs. Instead of typing “Peru travel guide” into a search engine, we now ask a <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">large language model</a> (LLM) “Where should I visit in Peru in December during a 10-day trip? Create a travel guide.”</p>
<p>Is keyword search no longer useful? While the rise of LLMs and vector search may suggest that traditional keyword search is becoming less prevalent, the future of search actually relies on effectively combining both methods. This is where <a href="https://www.mongodb.com/company/blog/product-release-announcements/boost-search-relevance-mongodb-atlas-native-hybrid-search">hybrid search</a> plays a crucial role, blending the precision of traditional text search with the powerful contextual understanding of vector search. Despite advances in vector technology, keyword search still has a lot to contribute and remains essential to meeting current user expectations.</p>
<h2>The rise of hybrid search</h2>
<p>By late 2022 and particularly throughout 2023, as vector search saw a surge in popularity (see image 1 below), it quickly became clear that vector embeddings alone were not enough. Even as embedding models continue to improve at retrieval tasks, full-text search will always remain useful for identifying tokens outside the training corpus of an embedding model. That is why users soon began to combine vector search with lexical search, exploring ways to leverage both precision and context-aware retrieval. This shift was driven in large part by the rise of generative AI use cases like <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/retrieval-augmented-generation">retrieval-augmented generation</a> (RAG), where high-quality retrieval is essential.</p>
<center><caption><b>Figure 1.</b> Number of vector search vendors per year and type.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-30 at 6.55.58 AM-nkweizza4d.png" alt="Bar graph displaying the number of vector search vendors each year. The number has increased significantly each year since 2019." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>As hybrid search matured beyond basic score combination, two main fusion techniques emerged: reciprocal rank fusion (RRF) and relative score fusion (RSF). They offer ways to combine results that do not rely on directly comparable score scales. RRF focuses on ranking position, rewarding documents that consistently appear near the top across different retrieval methods. RSF, on the other hand, works directly with raw scores from different sources of relevance, using normalization to minimize outliers and align modalities at a more granular level than rank alone can provide. Both approaches quickly gained traction and have become standard techniques in the market.</p>
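<p>As an illustration, reciprocal rank fusion is simple enough to sketch in a few lines; this is a generic implementation of the published formula (each list contributes 1/(k + rank) per document, with k commonly set to 60), not any vendor’s internals:</p>

```javascript
// Minimal reciprocal rank fusion (RRF): each ranked list contributes
// 1 / (k + rank) per document; k (often 60) damps the top-rank advantage.
function rrf(rankedLists, k = 60) {
  const scores = new Map();
  for (const list of rankedLists) {
    list.forEach((docId, i) => {
      const rank = i + 1; // ranks are 1-based
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank));
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]).map(([id]) => id);
}

const lexical = ["docA", "docB", "docC"]; // keyword search order
const vector = ["docB", "docD", "docA"];  // vector search order

console.log(rrf([lexical, vector])); // → ["docB", "docA", "docD", "docC"]
```

<p>Note that only ranks matter here: docB wins because it places well in both lists, even though neither list ranked it first by raw score.</p>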
<h2>How did the market react?</h2>
<p>The industry realized the need to introduce hybrid search capabilities, which brought different challenges for different types of players.</p>
<p>For lexical-first search platforms, the main challenge was to add vector search features and implement the bridging logic with their existing keyword search infrastructure. These vendors understood that the true value of hybrid search emerges when both modalities are independently strong, customizable, and tightly integrated.</p>
<p>On the other hand, vector-first search platforms faced the challenge of adding lexical search. Implementing lexical search through traditional inverted indexes was often too costly due to storage differences, increased query complexity, and architectural overhead. Many adopted sparse vectors, which represent keyword importance in a way similar to traditional term-frequency methods used in lexical search. Sparse vectors were key for vector-first databases in enabling a fast integration of lexical capabilities without overhauling the core architecture.</p>
<p>Hybrid search soon became table stakes and the industry focus shifted toward improving developer efficiency and simplifying integration. This led to a growing trend of vendors building native hybrid search functions directly into their platforms. By offering out-of-the-box support to combine and manage both search types, the delivery of powerful search experiences was accelerated.</p>
<p>As hybrid search became the new baseline, more sophisticated re-ranking approaches emerged. Techniques like cross-encoders, learning-to-rank models, and dynamic scoring profiles began to play a larger role, providing systems with additional alternatives to capture nuanced user intent. These methods complement hybrid search by refining the result order based on deeper semantic understanding.</p>
<h2>Lexical-first or vector-first? Top considerations when choosing a hybrid search solution</h2>
<p>When choosing how to implement hybrid search, your existing infrastructure plays a major role in the decision. For users working within a vector-first database, leveraging its lexical capabilities without rethinking the architecture is often enough. However, if the lexical search requirements are advanced, the optimal solution is often a traditional lexical search engine coupled with vector search, like MongoDB. Traditional lexical, or lexical-first, search offers greater flexibility and customization for keyword search, and when combined with vectors, provides a more powerful and accurate hybrid search experience.</p>
<center><caption><b>Figure 2.</b> Vector-first vs. lexical-first systems: Hybrid search evaluation.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-30 at 7.12.58 AM-xcff4zgcxo.png" alt="Table providing an evaluation of vector-first and lexical-first systems. For the column for control, vector is marked as low and lexical is high. For complexity, vector is low and lexical is medium. For Flexibility, vector is medium and lexical is high. And finally, for keyword capabilities, vector is low and lexical is high." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>Indexing strategy is another factor to consider. When setting up hybrid search, users can either keep keyword and vector data in separate indexes or combine them into one. Separate indexes give more freedom to tweak each search type, scale them differently, and experiment with scoring. The compromise is higher complexity, with two pipelines to manage and the need to normalize scores. On the other hand, a combined index is easier to manage, avoids duplicate pipelines, and can be faster since both searches run in a single pass. However, it limits flexibility to what the search engine supports and ties the scaling of keyword and vector search together. The decision is mainly a trade-off between control and simplicity.</p>
<p>Lexical-first solutions were built around inverted indexes for keyword retrieval, with vector search added later as a separate component. This often results in hybrid setups that use separate indexes. Vector-first platforms were designed for dense vector search from the start, with keyword search added as a supporting feature. These tend to use a single index for both approaches, making them simpler to manage but sometimes offering less mature keyword capabilities.</p>
<p>Lastly, a key aspect to take into account is the implementation style. Solutions with hybrid search functions handle the combination of lexical and vector search natively, removing the need for developers to manually implement it. This reduces development complexity, minimizes potential errors, and ensures that result merging and ranking are optimized by default. Built-in function support streamlines the entire implementation, allowing teams to focus on building features rather than managing infrastructure.</p>
<p>In general, lexical-first systems tend to offer stronger keyword capabilities and more flexibility in tuning each search type, while vector-first systems provide a simpler, more unified hybrid experience. The right choice depends on whether you prioritize control and mature lexical features or streamlined management with lower operational overhead.</p>
<h2>How does MongoDB do it?</h2>
<p>When vector search emerged, MongoDB added vector search indexes to the existing traditional lexical search indexes. With that, MongoDB evolved into a competitive vector database by providing developers with a unified architecture for building modern applications. The result is an enterprise-ready platform that integrates traditional lexical search indexes and vector search indexes into the core database.</p>
<p>MongoDB <a href="https://www.mongodb.com/blog/post/product-release-announcements/boost-search-relevance-mongodb-atlas-native-hybrid-search">recently released native hybrid search functions</a> to <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> and as part of a public preview for use with <a href="https://www.mongodb.com/products/self-managed/community-edition">MongoDB Community Edition</a> and <a href="https://www.mongodb.com/try/download/enterprise-advanced">MongoDB Enterprise Server</a> deployments. This feature is part of MongoDB’s integrated ecosystem, where developers get an out-of-the-box hybrid search experience to enhance the accuracy of application search and RAG use cases.</p>
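<p>As a sketch of what this out-of-the-box experience can look like, the pipeline below assumes the <code>$rankFusion</code> aggregation stage shipped with MongoDB’s native hybrid search; the index names, field paths, query vector, and weights are all placeholders:</p>

```javascript
// Sketch of a native hybrid search pipeline using the $rankFusion stage.
// Index names, paths, and the query vector are placeholders.
const hybridPipeline = [
  {
    $rankFusion: {
      input: {
        pipelines: {
          // Lexical arm: full-text results from a search index.
          fullText: [
            { $search: { index: "default", text: { query: "peru travel", path: "body" } } },
            { $limit: 20 },
          ],
          // Semantic arm: vector search results.
          semantic: [
            {
              $vectorSearch: {
                index: "vector_index",
                path: "embedding",
                queryVector: [/* embedding of the user's query */],
                numCandidates: 100,
                limit: 20,
              },
            },
          ],
        },
      },
      // Optional per-arm weighting applied before reciprocal rank fusion.
      combination: { weights: { fullText: 1, semantic: 2 } },
    },
  },
  { $limit: 10 },
];
```

<p>The key point is that the merging and ranking happen inside the database in one pass, rather than in application code stitching two result sets together.</p>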
<p>As a result, instead of managing separate systems for different workloads, MongoDB users benefit from a single platform designed to support both operational and AI-driven use cases. As generative AI and modern applications advance, MongoDB gives organizations a flexible, AI-ready foundation that grows with them.</p>
<div class="callout">
<p><b><a href="https://www.mongodb.com/company/blog/product-release-announcements/boost-search-relevance-mongodb-atlas-native-hybrid-search">Read our blog</a> to learn more about MongoDB’s new Hybrid Search function.</b></p>
<p><b>Visit the <a href="https://www.mongodb.com/resources/use-cases/artificial-intelligence">MongoDB AI Learning Hub</a> to learn more about building AI applications with MongoDB.</b></p>
</div>	]]></description>
      <pubDate>Tue, 30 Sep 2025 15:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/technical/top-considerations-when-choosing-hybrid-search-solution</link>
      <guid>https://www.mongodb.com/company/blog/technical/top-considerations-when-choosing-hybrid-search-solution</guid>
    </item><item>
      <title>Charting a New Course for SaaS Security: Why MongoDB Helped Build the SSCF</title>
      <description><![CDATA[<p>The way companies everywhere work is powered by SaaS. From collaboration tools to critical infrastructure, organizations rely on SaaS applications to drive their business forward. But this widespread adoption has created a significant security blind spot. How can you ensure every one of these applications is configured securely when they all offer different settings, capabilities, and levels of visibility?</p>
<p>This inconsistency creates friction, wastes resources, and ultimately, exposes businesses to unnecessary risk.</p>
<p>At MongoDB, we believe that securing the SaaS ecosystem is a shared responsibility. That's why we were proud to collaborate with the Cloud Security Alliance (CSA) and industry leaders like GuidePoint Security to develop a new standard—the <b>SaaS Security Capability Framework (SSCF)</b>.</p>
<h2>The problem: A gap in cloud security</h2>
<p>For years, the majority of security assessments have focused on the SaaS provider's organizational security, often through frameworks like SOC 2 or ISO 27001. While essential, these frameworks don't always address a critical question: what security capabilities are available to the SaaS customer within the application?</p>
<p>This gap means that security teams face a chaotic landscape. Every new SaaS app brings a different set of configurable controls for logging, identity management, and data access. This makes it nearly impossible to implement and track consistent security policies at scale, leading to a burdensome assessment process for everyone involved.</p>
<h2>The solution: A common framework for SaaS security</h2>
<p>The SSCF was created to solve this problem by establishing a clear, technical set of customer-facing security controls that SaaS vendors should provide. The framework is designed to empower customers by ensuring they have the tools they need to operate applications securely at scale on their side of the Shared Security Responsibility Model (SSRM).</p>
<p>The framework helps with many use cases, but three key audiences stand out:</p>
<ul>
<li>
<p><b>For risk management teams:</b> The SSCF provides a clear baseline to use during vendor assessments, simplifying procurement.</p>
</li>
<li>
<p><b>For SaaS security teams:</b> It offers a checklist for implementing the security features enterprises expect, streamlining the security program.</p>
</li>
<li>
<p><b>For SaaS vendors:</b> The SSCF standardizes assessment responses, reducing the overhead of custom questionnaires and helping vendors meet customer requirements.</p>
</li>
</ul>
<p>The SSCF focuses on six critical domains, aligned with CSA’s Cloud Control Matrix, providing specific and actionable controls for each:</p>
<ol>
<li>
<p><b>Change Control and Configuration Management (CCC):</b> Ensuring you can programmatically query and get documentation on all security configurations.</p>
</li>
<li>
<p><b>Data Security and Privacy Lifecycle Management (DSP):</b> Giving customers control over features like disabling file uploads to prevent malicious code.</p>
</li>
<li>
<p><b>Identity and Access Management (IAM):</b> Providing robust, modern controls for user access, including SSO enforcement, non-human identity (NHI) governance, and a dedicated read-only security auditor role.</p>
</li>
<li>
<p><b>Interoperability and Portability (IPY):</b> Giving administrators control over mass data exports and visibility into application integrations.</p>
</li>
<li>
<p><b>Logging and Monitoring (LOG):</b> Defining a clear set of comprehensive requirements for machine-readable logs with mandatory fields for effective threat detection and forensics.</p>
</li>
<li>
<p><b>Security Incident Management (SEF):</b> Requiring a simple, effective way for vendors to notify a designated customer security contact during an incident.</p>
</li>
</ol>
<h2>MongoDB's commitment to a more secure ecosystem</h2>
<p>Our involvement in creating the SSCF stems from our deep commitment to the security of our customers' data and the broader developer community. We believe that robust security shouldn't be an afterthought; it must be built in and easy to consume. The principles outlined in the SSCF—like strong identity controls and comprehensive logging—are philosophies we already built into our own data platform.</p>
<p>Strong security capabilities allow our customers to build and innovate faster and more securely, knowing they have a reliable foundation. And personally, as a co-chair of the CSA SSCF, I’ve seen great excitement and engagement on the part of our working group—which helped me realize how many companies are affected by this lack of consistency.</p>
<p>The SSCF is a vital step toward creating a more trusted, efficient, and secure global SaaS ecosystem. We are thrilled to have been a part of this foundational work and will continue to champion this standard that empowers developers and security teams alike.</p>
<div class="callout">
<p><b>Visit our <a href="https://www.mongodb.com/products/capabilities/security">security page</a> to learn more about how MongoDB helps protect your data. </b></p>
</div>	]]></description>
      <pubDate>Tue, 30 Sep 2025 13:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/charting-new-course-saas-security-why-mongodb-helped-build-sscf</link>
      <guid>https://www.mongodb.com/company/blog/news/charting-new-course-saas-security-why-mongodb-helped-build-sscf</guid>
    </item><item>
      <title>Carrying Complexity, Delivering Agility</title>
      <description><![CDATA[<h2>Resilience, intelligence, and simplicity: The pillars of MongoDB’s engineering vision for innovating at scale</h2>
<p>We’re relatively new to MongoDB—<a href="https://www.mongodb.com/company/blog/from-niche-nosql-enterprise-powerhouse-story-mongodbs-evolution">Ashish joined two years ago via the Granite acquisition</a> after a decade-plus building Google’s databases and distributed systems, and Akshat joined in June 2024 after 15 years building databases at AWS. We have a shared obsession with distributed systems. We’d seen how much developers loved MongoDB, which is part of the reason we joined the company—MongoDB is one of the most loved databases in the world. So one of the first things we sought to understand was why.</p>
<p>It turned out to be simpler than we thought: MongoDB’s vision is to get developers to production fast. This means making it easy to start and easier to keep going: one-command spin-up, sane defaults for day one, and zero-downtime upgrades and expansion to multiple clouds as you scale. That’s what developer agility looks like in practice: the ability to choose the best tools, move quickly, and trust the system to carry the weight of failure, complexity, and change.</p>
<p>At MongoDB, three principles drive that vision: resilience, intelligence, and simplicity.</p>
<p>Resilience is the ability to keep going when something breaks, intelligence is the ability to adapt to changing conditions, and simplicity is reducing cognitive and operational load so users and operators can move quickly and safely. These are not just technical goals—we treat them as non-negotiable design constraints. So if a change widens blast radius, breaks adaptive performance, or adds operator toil, it doesn’t ship.</p>
<p>In this post, we share the key engineering themes shaping our work and the mechanisms that keep us honest.</p>
<h2>Security as a first principle</h2>
<p><a href="https://www.mongodb.com/products/capabilities/security">Security</a> isn't a wall you build around your data. It's an assumption you design against from the very beginning. The assumption is simple: in a distributed system, you can’t trust the network, you can’t trust the hardware, and you certainly can't trust your neighbors.</p>
<p>This starts with architectural isolation. In most cloud database service offerings, you're sharing walls with strangers. Shared walls hurt performance, they leak failures, and sometimes they leak secrets. We minimize shared walls, and where utilities must be shared, we build firebreaks. Stronger isolation reduces the blast radius of mistakes and attacks.
With a <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> dedicated cluster, you get the whole building. Your cluster runs on its own provisioned servers, in its own private network (VPC). Your unencrypted data is never available in a shared VM or process. There are no &quot;noisy neighbors&quot; because you have no neighbors. The attack surface shrinks dramatically, and resource contention disappears. The blast radius of a problem elsewhere stops at your door. In other words, we follow an anti-Vegas principle—what happens outside your cluster will stay outside.</p>
<p>But true security is layered. Once we’ve isolated the environment, we defend it from the inside out. We start by asking the hard questions:</p>
<ul>
	<li>Who are you? That's strong authentication, from SCRAM to AWS IAM.</li>
	<li>What can you do? That's fine-grained RBAC, enforcing the principle of least privilege.</li>
	<li>What if someone gets in? That's encryption everywhere—in transit, at rest, and even in use with <a href="https://www.mongodb.com/docs/mongoid/current/security/encryption/">Client-Side Field Level Encryption</a>.</li>
	<li>How do we lock down the roads? That’s network controls like IP access lists and private endpoints.</li>
	<li>And how do we prove it? That's granular auditing for a clear, immutable trail.</li>
</ul>	
<p>Every one of these layers reflects defense in depth.</p>
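<p>As a toy illustration of how those layers compose, here is a minimal Python sketch (the role table, the <code>check_request</code> helper, and the in-memory audit log are hypothetical teaching devices, not MongoDB APIs):</p>

```python
# Toy sketch of defense-in-depth request handling. Illustrative only:
# ROLES, check_request, and AUDIT_LOG are hypothetical, not MongoDB APIs.

# Fine-grained RBAC: each role grants only the minimum needed actions.
ROLES = {
    "analyst": {("reports", "find")},
    "writer": {("reports", "find"), ("reports", "insert")},
}

AUDIT_LOG = []  # auditing layer: every decision leaves a trail

def check_request(user, role, collection, action, ip, allow_list):
    # Network layer: reject connections from outside the IP access list.
    if ip not in allow_list:
        AUDIT_LOG.append((user, action, "denied:ip"))
        return False
    # Authorization layer: least privilege -- role must grant this exact action.
    if (collection, action) not in ROLES.get(role, set()):
        AUDIT_LOG.append((user, action, "denied:rbac"))
        return False
    AUDIT_LOG.append((user, action, "allowed"))
    return True

allow = {"10.0.0.5"}
print(check_request("ana", "analyst", "reports", "find", "10.0.0.5", allow))    # True
print(check_request("ana", "analyst", "reports", "insert", "10.0.0.5", allow))  # False
print(check_request("bob", "writer", "reports", "insert", "1.2.3.4", allow))    # False
```

<p>A request must clear every layer; failing any single one is enough to deny it, and every outcome is recorded.</p>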
<center><caption><b>Figure 1.</b> Queryable Encryption.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-24 at 11.56.40 AM-ev2zptr309.png" alt="This diagram breaks down how queryable encryption works. The query is sent from the authenticated application to the MongoDB Driver. The driver then accesses the customer-provisioned encryption key to ensure permissions are correct. The encrypted query then connects to the MongoDB Database, which pulls the encrypted data. This data is then pulled back through the MongoDB driver and then decrypted before reaching the user. " title=" " style="width: 800px" border="1"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>The history of database security is full of trade-offs between safety and functionality. For decades, the trade-off has been brutal: to run a query, you had to decrypt your data on the server, exposing it to risk. <a href="https://www.mongodb.com/docs/v7.0/core/queryable-encryption/">Queryable Encryption</a>—an industry-first searchable encryption scheme developed by MongoDB Research—breaks this paradigm. It allows your application to run expressive queries, including equality and range checks on data that remains fully encrypted on the server. The decryption keys never leave your client. The server maintains encrypted indexes for the fields you wish to query on, and queries can be done entirely on the encrypted data, maintaining the strongest privacy and security of your sensitive data.</p>
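<p>To make the client-held-key idea concrete, here is a deliberately naive Python sketch. It is <i>not</i> how Queryable Encryption works (QE uses a randomized, structured encryption scheme with server-side encrypted indexes developed by MongoDB Research); this toy only shows that a server can match opaque tokens for equality without ever seeing plaintext or keys:</p>

```python
# Toy equality-matching-on-tokens sketch. NOT Queryable Encryption:
# this deterministic HMAC scheme leaks equality patterns and supports
# no range queries; it only illustrates "keys never leave the client."
import hmac
import hashlib

CLIENT_KEY = b"client-only-secret"  # held by the client, never sent to the server

def token(value: str) -> str:
    # Client-side: derive an opaque token from the plaintext value.
    return hmac.new(CLIENT_KEY, value.encode(), hashlib.sha256).hexdigest()

# Insert path: the client stores the token; the server never sees "555-12-9876".
server_index = {token("555-12-9876"): "doc1"}

# Query path: the client tokenizes the search value; the server just
# compares opaque strings.
print(server_index.get(token("555-12-9876")))  # doc1
print(server_index.get(token("111-00-2222")))  # None
```

<p>The real scheme goes much further (randomized ciphertexts, encrypted index structures, range support), but the trust boundary is the same: decryption keys stay client-side.</p>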
<p>By carrying these defenses in the platform itself, security stops being another burden developers have to design around. They get the <a href="https://www.mongodb.com/products/platform/trust">privacy guarantees</a>, the audit trails, and the <a href="https://www.mongodb.com/products/platform/trust">compliance</a>, without sacrificing functionality or velocity.</p>
<h2>Achieving resilience: Architecture, operations, and proof</h2>
<p>Systems don’t live in a vacuum. They live in messy realities: network partitions, power outages, kernel panics, cloud control plane hiccups, operator mistakes. The measure of resilience is not “will it fail?” but “what happens next?” Resilience is the ability to keep going when the thing you depend on stops working, not because you planned for it to fail, but because you planned for it to recover.</p>
<p>Here’s how we achieve resilience.</p>
<p><b>Architecture:</b> MongoDB Atlas is built on the assumption that something may fail at any time. Every cluster starts life as a replica set, spread across independent availability zones. That’s the default, not an upgrade. The moment a primary becomes unreachable, an election happens. Within seconds, another node takes over, clients reconnect, and in-flight writes retry automatically. Availability-zone diversity buys you protection against a data center outage. Adding more regions buys you protection against a full region failure. Adding more cloud providers buys you insulation against provider-wide events. Each step up that ladder buys you more protection against bigger failures. The trade-off is that each step adds more moving parts to manage, and the failure modes evolve: intra-region links are fast; cross-region links are wide and lossy; cross-cloud adds different fabrics, load balancers, and failure semantics.</p>
<center><caption><b>Figure 2.</b> Resilience options: Single zone, multi-AZ, multi-region, multi-cloud.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-25 at 7.43.20 AM-mbvfuz44wi.png" alt="Diagram showing an example of how multi-region, multi-cloud support would work." title=" " style="width: 800px" border="1"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>Our job is to make any type of failure (node failures, link failures, gray failures) invisible to you. Writes are only committed when a majority of voting members have the entry in the same term. That rule sounds small, but it’s the safety net that prevents a primary stranded on the wrong side of a partition from accepting writes it can’t keep. Heartbeats and UpdatePosition messages carry progress and truth; if a node learns of a higher term, it steps down immediately. When elections happen, the new primary doesn’t open for writes until it has caught up to the latest known state, preserving as many uncommitted writes as possible. Secondaries apply operations as they arrive, even over lossy links.</p>
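<p>The majority-commit rule can be sketched in a few lines of Python (a simplified model, not MongoDB’s implementation; each voting member is reduced to a last-replicated index and term):</p>

```python
# Simplified model of the majority-commit rule: an oplog entry is
# committed only once a majority of voting members have replicated it
# in the same term. Not MongoDB's actual replication code.
def is_committed(entry_term, entry_index, member_states, current_term):
    # member_states: list of (last_replicated_index, last_term), one per
    # voting member.
    if entry_term != current_term:
        return False  # an old-term entry can't be directly majority-committed
    acks = sum(1 for idx, term in member_states
               if idx >= entry_index and term == entry_term)
    return acks > len(member_states) // 2

# 5 voting members; 3 have the entry at index 10 in term 7 -> committed.
print(is_committed(7, 10, [(10, 7), (10, 7), (10, 7), (8, 7), (5, 6)], 7))  # True
# Only 2 of 5 have index 11: a primary on the minority side of a
# partition can never satisfy the rule, so those writes are not kept.
print(is_committed(7, 11, [(11, 7), (11, 7), (10, 7), (8, 7), (5, 6)], 7))  # False
```
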
<p><b>Operating discipline:</b> Resilience isn’t just in the code and architecture, it’s in how you operate it every day. Even the best design will fail without the discipline to detect problems early and recover quickly. You need to embed it in how you operate. Operational excellence is about preventing avoidable failures, detecting the ones you can’t prevent, and recovering quickly when they happen.</p>
<p>And we’ve turned that into a discipline. Every week, the people closest to the work—engineers, on-calls, product managers, and leaders—step out of the day’s firefight to review the system with rigor. We celebrate the small wins that quietly make the system safer. We dig into failures to understand not just what happened, but how to make sure it doesn’t happen again anywhere. The goal isn’t perfection. Instead, it’s building a system where every lesson learned and every fix made raises the floor for everyone. A single automation can remove a whole category of incidents. A well-written postmortem can stop the same mistake from happening across dozens of systems. The return isn’t linear—it compounds.</p>
<center><caption><b>Figure 3.</b> The ops excellence flywheel.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-25 at 7.46.34 AM-81rd3em1y9.png" alt="Circular diagram for the ops excellence. The names around the diagram are prevent, detect, recover, learn, and improve. " title=" " style="width: 800px" border="1"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>When resilience works, failure stops being something every developer has to carry in their head. The system absorbs it, recovers, and lets them keep moving.</p>
<p><b>Proof before shipping:</b> Testing tells you that your code works in the cases you’ve thought to test. Formal verification tells you whether it works in all the cases that matter, even the ones you didn’t think to test. MongoDB is among the few cloud databases that apply and publish formal methods on the core database paths. This rigor translates into agility: teams building on the database ship products without worrying about edge cases caused by node failures, failovers, or clock skew. Those edge cases in the database have already been explored, proven, and designed against.</p>
<center><caption><b>Figure 4.</b> Formal methods.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-25 at 7.49.18 AM-7hkbxraoa7.png" alt=" " title=" " style="width: 800px" border="1"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>When we design a new replication or failover protocol, we don’t just code it, run a few chaos tests, and ship it. We build a mathematical model of the core logic stripped of distracting details like disk format or thread pools and ask a model checker to try every possible interleaving of events. The tool doesn’t skip the “unlikely” cases. It tries them all.</p>
<p>Take <a href="https://arxiv.org/pdf/2102.11960" target="_blank">logless reconfiguration</a>. The idea is simple: MongoDB decouples configuration changes from the data replication log, so membership changes no longer queue behind user writes. But while the idea is simple, the implementation is not. Without care, concurrent configs can fork the cluster, primaries can be elected on stale terms, or new majorities can lose the old majority’s writes. We modeled the protocol in TLA+, explored millions of interleavings, and distilled the solution down to four invariants: terms block stale primaries, monotonic versions prevent forks, majority votes stop minority splits, and the oplog-commit rule ensures durability carries forward.</p>
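<p>Two of those invariants, monotonic versions preventing forks and higher terms blocking stale primaries, can be sketched as a simple acceptance rule (an illustrative model, not the actual protocol or its TLA+ specification):</p>

```python
# Illustrative model of config acceptance under logless reconfiguration:
# a node adopts a new config only if it is strictly newer, comparing
# (term, version) lexicographically. Higher terms win; within a term,
# versions increase monotonically. Not the real protocol code.
def accept_config(node, new_version, new_term):
    if (new_term, new_version) > (node["term"], node["version"]):
        node["term"], node["version"] = new_term, new_version
        return True
    return False  # a config from a stale primary is rejected, preventing forks

node = {"term": 3, "version": 5}
print(accept_config(node, new_version=6, new_term=3))  # True  (newer version)
print(accept_config(node, new_version=9, new_term=2))  # False (stale term)
print(node)  # {'term': 3, 'version': 6}
```

<p>The model checker's job is to confirm that rules like this hold under <i>every</i> interleaving of elections, reconfigurations, and message delays, not just the ones a tester happens to try.</p>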
<p>For <a href="https://www.vldb.org/pvldb/vol18/p5045-schultz.pdf" target="_blank">transactions</a>, we developed a modular formal specification of the multi-shard protocol in TLA+ to verify protocol correctness and snapshot isolation, defined and tested the WiredTiger storage interface with automated model-based techniques, and analyzed permissiveness to assess how well concurrency is maximized within the isolation level.</p>
<p>These models are not giant, perfect representations of the whole system. They’re small, precise abstractions that focus on the essence of correctness. The payoff is simple: the model checker explores more corner cases in minutes than a human tester could in years.</p>
<p>Alongside formal proofs, we use additional tools to test the implementation under deterministic simulation: fuzzing, fault injection, and message reordering against real binaries. Determinism gives us one-click bug replication, CI/CD regression gates, and reliable incident replays, so rare timing bugs become easy fixes.</p>
<h2>Mastering the multi-cloud reality with simple abstractions</h2>
<p>Developer agility isn’t about having a hundred choices on a menu; it's about removing the friction that makes real choice impossible. One such choice that almost never materializes in practice is multi-cloud. We achieve multi-cloud by building a unified data fabric that lets you put your data anywhere you need it, controlled from a single place. A DIY multi-cloud database where you run self-managed MongoDB across AWS, Microsoft Azure, and Google Cloud seems simple on paper. In practice, it involves weeks of networking (VPC/VNet peering, routing, and firewall rules) and brittle scripts. The theoretical agility that you got by going multi-cloud collapses under the weight of operational reality.</p>
<center><caption><b>Figure 5.</b> Multi-cloud replica sets with MongoDB.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-25 at 7.52.08 AM-veo3fac5hp.png" alt="Diagram that is a map of the world with different data centers highlighted, showcasing the idea of multi-cloud replica sets." title=" " style="width: 800px" border="1"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>Now contrast this with MongoDB Atlas, where you don’t have to manually orchestrate provisioning across three different cloud APIs. A single replica set can span AWS, Google Cloud, and Azure. Provisioning, networking, and failover are handled for you. Your app connects with a standard mongodb+srv string, and our intelligent drivers ensure that if your AWS primary fails, traffic automatically fails over to a new primary in GCP or Azure without any changes to your code. This transforms an operational nightmare into a simple deployment choice, giving you freedom from vendor lock-in and a robust defense against provider-wide outages.</p>
<p>Agility also means precise data placement for data sovereignty and global latency. Global Clusters and zone sharding let you describe simple rules so data stays where policy requires and users are served locally. For example, a rule mapping &quot;DE&quot;, &quot;FR&quot;, and &quot;ES&quot; to the EU_Zone can guarantee that all European customer data and order history physically reside within European borders, satisfying strict GDPR requirements out of the box. Because zone sharding is built into the core sharding system, you can add or adjust placement without app rewrites. That’s real agility: the platform removes the hard parts, so the choices are real.</p>
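<p>The routing idea behind such a rule can be sketched as a toy lookup (the real mechanism is zone sharding, configured with mongosh helpers like <code>sh.addShardToZone</code> and <code>sh.updateZoneKeyRange</code>; the <code>ZONES</code> table and <code>route</code> helper here are purely illustrative):</p>

```python
# Toy sketch of country-to-zone routing. In MongoDB this is enforced by
# zone sharding on the shard key; ZONES and route() are illustrative only.
ZONES = {"DE": "EU_Zone", "FR": "EU_Zone", "ES": "EU_Zone", "US": "US_Zone"}

def route(doc):
    # The shard-key prefix (here, a country code) decides which zone's
    # shards physically store the document.
    return ZONES.get(doc["country"], "DEFAULT_Zone")

print(route({"country": "FR", "order": 42}))  # EU_Zone
print(route({"country": "US", "order": 43}))  # US_Zone
```
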
<h2>From data to intelligence: Building the next generation of AI-powered applications</h2>
<p>Building intelligent AI-powered features has been a complex and fragmented process. The traditional approach forced developers to maintain separate vector databases for semantic search, creating brittle ETL pipelines to shuttle data back and forth from their primary operational database. This introduced architectural complexity, latency, and a higher total cost of ownership. That’s not agility. That’s friction.</p>
<p>Our approach is to eliminate this friction entirely. We believe the best place to build AI-powered applications is directly on your operational data. This is the vision behind MongoDB Atlas Vector Search. Instead of creating a separate product, we integrated vector search capabilities directly into the MongoDB query engine. This is a profound simplification for developers. You can now perform semantic search—finding results based on meaning and context, not just keywords—using the same <a href="https://www.mongodb.com/products/tools/mongodb-query-api">MongoDB Query API</a> (MQL) and drivers you already know. There are no new systems to learn and no data to synchronize. You can seamlessly combine vector search with traditional filters, aggregations, and updates in a single, expressive query. This dramatically accelerates the development of modern features like RAG (<a href="https://www.mongodb.com/resources/basics/artificial-intelligence/retrieval-augmented-generation">retrieval-augmented generation</a>) for chatbots, sophisticated recommendation engines, and intelligent search experiences. Intelligence isn’t something you bolt on. It’s something you build on.</p>
<p>This is an area where we continue to make multiple enhancements. For example, with the acquisition of <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> earlier this year, we are making progress towards integrating Voyage's embedding and reranking models into Atlas to deliver a <a href="https://www.mongodb.com/company/blog/engineering/rethinking-information-retrieval-mongodb-with-voyage-ai">truly native experience</a>. We are also actively applying AI toward our <a href="https://www.mongodb.com/resources/basics/application-modernization">Application Modernization</a> efforts. Consider a relational database application that involves pages of SQL statements representing a view or a query. How do you translate it so it can work effectively with MongoDB’s MQL? LLMs have advanced enough to provide a base version that may be mostly the correct shape, but to get it accurate and performant requires building additional tooling. We are actively working with several customers, not only on the SQL → MQL translation, but also on modernizing their application code using similar techniques.</p>
<h2>What’s next?</h2>
<p>We’ll keep pushing on the same three levers: resilience, intelligence, and simplicity. Keep watching this space. We’ll publish deep dives similar to our <a href="https://www.mongodb.com/company/blog/technical/rapid-prototyping-safe-logless-reconfiguration-protocol-mongodb-tla-plus">TLA+ write-up on logless reconfiguration</a>, covering formal methods and other behind-the-scenes work on hard engineering problems, such as <a href="https://www.mongodb.com/company/blog/mongodb-8-0-improving-performance-avoiding-regressions">MongoDB 8.0 performance improvement challenges</a>. Our vision is to carry the complexity so developers don’t have to—and to give them the agility &amp; freedom to build the next generation of intelligent applications wherever they want.</p>
<div class="callout">
<p><b>For more on how MongoDB went from a “niche” NoSQL database to a powerhouse with the high availability, tunable consistency, ACID transactions, and robust security that enterprises demand, <a href="https://www.mongodb.com/company/blog/from-niche-nosql-enterprise-powerhouse-story-mongodbs-evolution">check out the MongoDB blog</a>.</b></p>
</div>	]]></description>
      <pubDate>Thu, 25 Sep 2025 16:15:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/engineering/carrying-complexity-delivering-agility</link>
      <guid>https://www.mongodb.com/company/blog/engineering/carrying-complexity-delivering-agility</guid>
    </item><item>
      <title>From Niche NoSQL to Enterprise Powerhouse: The Story of MongoDB&#39;s Evolution</title>
      <description><![CDATA[<p>I joined MongoDB two years ago through the acquisition of Grainite, a database startup I co-founded. My journey here is built on a long career in databases, including many years at Google, where I was most recently responsible for the company’s entire suite of native databases—Bigtable, Spanner, Datastore, and Firestore—powering both Google's own products and Google Cloud customers. My passion has always been large-scale distributed systems, and I find that the database space offers the most exciting and complex challenges to solve.</p>
<p>At MongoDB my focus is on architectural improvements across the product stack. I've been impressed with the progression of MongoDB's capabilities and the team's continuous innovation ethos.</p>
<p>In this blog post, I’ll share some of my understanding of MongoDB’s history and how MongoDB became the de facto standard for document databases. I’ll also highlight select innovations we are actively exploring.</p>
<h2>The dawn of NoSQL</h2>
<p>During the &quot;move fast and break things&quot; era of Web 2.0, the digital landscape was exploding. Developers were building dynamic, data-rich applications at an unprecedented pace, and the rigid, tabular structures of legacy relational databases like Oracle and Microsoft SQL Server quickly became a bottleneck. A new approach was needed, one that prioritized developer productivity, flexibility, and massive scale. At the same time, <a href="https://www.mongodb.com/resources/basics/json-and-bson">JSON's</a> popularity as a flexible, cross-language format for communicating between browsers and backends was surging. This collective shift toward flexibility gave rise to <a href="https://www.mongodb.com/resources/basics/databases/nosql-explained">NoSQL databases</a>, and MongoDB, with its native document-based approach, was at the forefront of the movement.</p>
<p>In the early days, there was a perception that MongoDB was great for use cases like social media feeds or product catalogs, but not for enterprise applications where data integrity is non-negotiable—like financial transactions. This view was never perfectly accurate, and it certainly isn't today. So, what created this perception? It came down to two main factors: categorization and maturity.</p>
<p>First, most early NoSQL databases were built on an “eventually consistent” model, prioritizing Availability and Partition Tolerance (AP) under the <a href="https://en.wikipedia.org/wiki/CAP_theorem" target="_blank">CAP theorem</a>. MongoDB was an exception, designed to prioritize Consistency and Partition Tolerance (CP). But, in a market dominated by AP systems, MongoDB was often lumped in with the rest, leading to the imprecise label of having “light consistency.” Second, all new databases take time to mature for mission-critical workloads. Any established system-of-record database today has gone through many versions over many years to earn that trust. After more than 15 years of focused engineering, today MongoDB has the required codebase maturity, features, and proven track record for the most demanding enterprise applications.</p>
<p>The results speak for themselves. As our CEO Dev Ittycheria mentioned during the <a href="https://investors.mongodb.com/financial-information/quarterly-results">Q2 2026 earnings</a> call, over 70% of the Fortune 100—as well as 7 of the 10 largest banks, 14 of the 15 largest healthcare companies, and 9 of the 10 largest manufacturers globally—are MongoDB customers. This widespread adoption by the world's most sophisticated organizations is a testament to a multi-year, deliberate engineering journey that has systematically addressed the core requirements of enterprise-grade systems.</p>
<h2>MongoDB’s engineering journey: Building a foundation of trust</h2>
<p>MongoDB’s evolution from being perceived as a niche database to an enterprise powerhouse wasn't an accident; it was the result of a relentless focus on addressing the core requirements of enterprise-grade systems. Improvements instrumental to this transformation include:</p>
<ul>
	<li><b><a href="https://www.mongodb.com/resources/basics/high-availability">High availability</a> with replica sets:</b> The first step was eliminating single points of failure. Replica sets were introduced as self-healing clusters that provide automatic failover, ensuring constant uptime and data redundancy. Later, the introduction of a Raft-style consensus protocol provided even more reliable and faster failover and leader elections, especially in the event of a network partition. This architecture is the foundation for MongoDB’s current multi-region or run-anywhere deployments, and even allows a single replica set to span multiple cloud providers for maximum resilience.</li>
</ul>	
<center><caption><b>Figure 1.</b> Horizontal scaling.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-24 at 11.52.47 AM-hvu9nl4a7c.png" alt="Diagram showing horizontal scaling. This starts at the top with the application, which connects to the mongos/router. From here, data is then sharded to the config shard, shard b, and shard c. " title=" " style="width: 800px" border="1"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<ul>
	<li><b>Massive scalability with horizontal sharding:</b> Introduced at the same time as replica sets, <a href="https://www.mongodb.com/resources/products/capabilities/sharding">sharding</a> is a native, foundational part of MongoDB. MongoDB built sharding to allow data to be partitioned across multiple servers, enabling virtually limitless horizontal scaling to support massive datasets and high-throughput operations. Advanced features like zone sharding further empower global applications by pinning data to specific geographic locations to reduce latency and comply with data residency laws like GDPR.</li>
	<li><b>Tunable consistency:</b> Recognizing that not all data is created equal, MongoDB empowered developers with tunable read and write concerns. Within a single application, some data—like a 'page view count'—might not have the same consistency requirements as an 'order checkout value'. Instead of using separate, specialized databases for each use case, developers can use MongoDB for both. This moved the platform beyond a one-size-fits-all model, allowing teams to choose the precise level of consistency their application required per operation—from "fire and forget" for speed to fully acknowledged writes across a majority of replicas for guaranteed durability. This flexibility provides the best price/performance tradeoffs for modern applications.</li>
	<li><b>The game-changer, multi-document <a href="https://www.mongodb.com/resources/basics/databases/acid-transactions">ACID transactions</a>:</b> From its inception, MongoDB has always provided atomic operations for single documents. The game-changing moment was the introduction of multi-document ACID transactions in 2018 with MongoDB 4.0, which was arguably the single most important development in its history. This feature, later extended to include sharded clusters, meant that complex operations involving multiple documents—like a financial transfer between two accounts—could be executed with the same atomicity, consistency, isolation, and durability (ACID) guarantees as a traditional relational database. This milestone shattered the biggest barrier to adoption for transactional applications. And the <a href="https://www.mongodb.com/docs/manual/release-notes/8.2/">recently released MongoDB 8.2</a> is the most feature-rich and performant version of MongoDB yet. </li>
</ul>
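<p>The per-operation trade-off behind tunable write concerns can be sketched as a toy acknowledgment rule (a simplified model, not driver code):</p>

```python
# Toy model of tunable write concern: the same write can be acknowledged
# at w=1 for speed, or at w="majority" for durability across the replica
# set. Simplified; real drivers also handle journaling and timeouts.
def acknowledged(w, acks, voting_members):
    if w == "majority":
        return acks > voting_members // 2
    return acks >= w  # numeric write concern: w nodes must have the write

# 3-member replica set; so far only the primary has applied the write.
print(acknowledged(1, acks=1, voting_members=3))           # True  (fast path)
print(acknowledged("majority", acks=1, voting_members=3))  # False (must wait)
print(acknowledged("majority", acks=2, voting_members=3))  # True  (durable)
```

<p>A 'page view count' might use the fast path, while an 'order checkout value' waits for the majority, both against the same database.</p>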
<ul>
	<li><b>Strict security and compliance:</b> To meet the stringent security demands of the enterprise, MongoDB layered in a suite of advanced security controls. Features like Role-Based Access Control (RBAC), detailed auditing, and Field-Level Encryption were just the beginning. The release of Queryable Encryption (<a href="https://www.mongodb.com/company/blog/product-release-announcements/queryable-encryption-expands-search-power">to which we recently introduced support for prefix, suffix, and substring queries</a>) marked a revolutionary breakthrough, allowing non-deterministic encrypted data to be queried without ever decrypting it on the server, ensuring data remains confidential even from the database administrator. To provide independent validation, MongoDB Atlas has achieved a number of internationally recognized security certifications and attestations, including <b>ISO/IEC 27001</b>, <b>SOC 2 Type II</b>, <b>PCI DSS</b>, and <b>HIPAA</b> compliance, demonstrating a commitment to meeting the rigorous standards of the world's most regulated industries.</li>
</ul>
<center><caption><b>Figure 2.</b> Queryable Encryption.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-24 at 11.56.40 AM-ev2zptr309.png" alt="This diagram breaks down how queryable encryption works. The query is sent from the authenticated application to the MongoDB Driver. The driver then accesses the customer-provisioned encryption key to ensure permissions are correct. The encrypted query then connects to the MongoDB Database, which pulls the encrypted data. This data is then pulled back through the MongoDB driver and then decrypted before reaching the user. " title=" " style="width: 800px" border="1"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>The ultimate proof of enterprise readiness lies in real-world adoption. Today, MongoDB is trusted by leading organizations across the most demanding sectors to run their core business systems.</p>
<p>For example, <a href="https://www.mongodb.com/solutions/customer-case-studies/citizens-bank">Citizens Bank</a>, one of the oldest and largest financial institutions in the United States, moved to modernize its fraud detection capabilities from a slow, batch-oriented legacy system. They built a new, comprehensive fraud management platform on MongoDB Atlas that allows for near real-time monitoring of transactions.</p>
<p>This use case in a highly regulated industry requires high availability, low latency, and strong consistency to analyze transactions in real-time and prevent financial loss—a direct refutation of the old &quot;eventual consistency&quot; criticism.</p>
<p>Another example is that of <a href="https://www.mongodb.com/solutions/customer-case-studies/bosch">Bosch Digital</a>, the software and systems house for the Bosch Group. Bosch Digital uses MongoDB for its IoT platform, Bosch IoT Insights, to manage and analyze massive volumes of data from connected devices—from power tools used in aircraft manufacturing, to sensors in vehicles. IoT data arrives at high speeds, in huge volumes, and in variable structures. This mission-critical use case demonstrates MongoDB's ability to handle the demands of industrial-scale IoT, providing the real-time analytics needed to ensure quality, prevent errors, and drive innovation.</p>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Technical_ACTION_Rocket_Spot-smhepwsxn5.png" alt=" " title=" " style="width: 400px"/>
</div>
</figure>
<p>Then there’s <a href="https://www.mongodb.com/solutions/customer-case-studies/coinbase">Coinbase</a>, which relies on MongoDB to seamlessly handle the volatile and unpredictable cryptocurrency market. Specifically, Coinbase architected a MongoDB Atlas solution that would accelerate scaling for large clusters. The result was that Coinbase end-users gained a more seamless experience. Previously, traffic spikes could impact some parts of the Coinbase app. Now, users don’t even notice changes happening behind the scenes.</p>
<p>These are just a few examples; customers across all verticals, industries, and sizes depend on MongoDB for their most demanding production use cases. A common theme is that real-world data is messy, variable, and doesn't fit neatly into rigid, tabular structures.</p>
<p>The old adage says that if all you have is a hammer, everything looks like a nail. For decades, developers only had the relational &quot;hammer.&quot; With MongoDB, they now have a modern tool that adapts to how developers work and the data they need to manage and process.</p>
<h2>The road ahead: Continuous innovation</h2>
<p>MongoDB is not resting on its laurels. The team is as excited about what the future holds as they were when MongoDB was first launched, and we continue to innovate aggressively to meet—and anticipate—the modern enterprise’s demands. Here are select improvements we are actively exploring.</p>
<p>A critical need we hear from customers is how to support elastic workloads in a price-performant way. To address this, over the past two years we’ve rolled out Search Nodes, a capability unique to MongoDB that lets you scale search and vector workloads independently of the database to improve availability and price-performance.</p>
<p>We are now working closely with our most sophisticated customers to explore how to deliver similar capabilities across more of MongoDB. Our vision is to enable customers to <b>scale compute for high-throughput queries without over-provisioning storage</b>, and vice versa. We can do all this while building upon what is already one of the strongest security postures of any cloud database, as we continue to raise the bar for durability, availability, and performance.</p>
<p>Another challenge facing large enterprises is the significant cost and risk associated with modernizing legacy applications. To solve this, we are making a major strategic investment in <b>enterprise application modernization, and recently announced the <a href="https://www.mongodb.com/solutions/use-cases/modernize">MongoDB Application Modernization Platform</a></b>. We have been engaged with several large enterprises to migrate their legacy relational database applications—code, data, and everything in between—over to MongoDB. This is not a traditional, manual migration effort capped by the number of people assigned. Instead, we are systematically developing agentic tooling and AI-based frameworks, techniques, and processes that allow us to smartly migrate legacy applications into modern microservices-based architectures at scale.</p>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Technical_SOFTWARE_AMP_Spot None 2x (1)-qvvvzxf9gi.png" alt=" " title=" " style="width: 400px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>One of the more exciting findings from a recent effort, working with a large enterprise in the insurance sector, was that optimized queries on MongoDB ran just as fast, and often significantly faster, than on their legacy relational database, even when schemas were translated 1:1 between relational tables and MongoDB collections, and lots of nested queries and joins were involved. Batch jobs implemented as complex stored procedures that took several hours to execute on the relational database could be completed in under five minutes, thanks to the parallelism MongoDB natively enables (for more, see the <a href="https://www.mongodb.com/company/blog/technical/modernizing-core-insurance-systems-breaking-batch-bottleneck">MongoDB Developer Blog</a>).</p>
<p>Based on the incredible performance gains seen in these modernization projects, we're addressing another common need: ensuring fast queries even when data models aren't perfectly optimized. We are actively exploring improvements to our <b>Query Optimizer</b> that will improve lookup and join performance. While the document model will always be the most performant way to model your data, we are ensuring that even when you don't create the ideal denormalized data model, MongoDB will deliver performance on par with or better than the alternatives.</p>
<p>Finally, developers today are often burdened with stitching together multiple services to build modern, AI-powered applications. To simplify this, the platform is expanding far beyond a traditional database, focused on providing a <b>unified developer experience</b>. This includes a richer ecosystem with integrated capabilities like <a href="https://www.mongodb.com/products/platform/atlas-search">Atlas Search</a> for full-text search, <a href="https://www.mongodb.com/products/platform/atlas-vector-search">Atlas Vector Search</a> for AI-powered semantic search, and native <a href="https://www.mongodb.com/products/platform/atlas-stream-processing">Stream Processing</a> to handle real-time data. We are already working on our first integrations, and continue to explore how embedding generation as a service within MongoDB Atlas, powered by our own Voyage AI models, can further simplify application development.</p>
<h2>From niche to necessity</h2>
<p>MongoDB began its journey as a (seemingly) niche NoSQL database with perceptions and tradeoffs that made it unsuitable for many core business applications. But, through a sustained and deliberate engineering effort, it has delivered the high availability, tunable consistency, ACID transactions, and robust security that enterprises demand. The perceptions of the past no longer match the reality of the present. When 7 of the 10 largest banks are already using MongoDB, isn’t it time to re-evaluate MongoDB for your most critical applications?</p>
<div class="callout">
<p><b>For more on why innovation requires a modern, AI-ready database—and why companies like Nationwide, Wells Fargo, and The Knot Worldwide chose MongoDB over relational databases—<a href="https://www.mongodb.com/resources/solutions/use-cases/innovate-and-modernize">see the MongoDB customer use case site</a>.</b></p>
</div>	]]></description>
      <pubDate>Thu, 25 Sep 2025 16:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/from-niche-nosql-enterprise-powerhouse-story-mongodbs-evolution</link>
      <guid>https://www.mongodb.com/company/blog/from-niche-nosql-enterprise-powerhouse-story-mongodbs-evolution</guid>
    </item><item>
      <title>Endian Communication Systems and Information Exchange in Bytes</title>
      <description><![CDATA[<p>Imagine two people trying to exchange phone numbers. One starts from the country code and moves to the last digit, while the other begins at the last digit and works backwards. Both are technically right, but unless they agree on the direction, the number will never connect.</p>
<p>Computers face a similar challenge when they talk to each other. Deep inside processors, memory chips, and network packets, data is broken into bytes. But not every system agrees on which byte should come first. Some start with the “big end” of the number, while others begin with the “little end.”</p>
<p>This simple difference, known as endianness, quietly shapes how data is stored in memory, transmitted across networks, and interpreted by devices. Whether it’s an IoT sensor streaming temperature values, a server processing telecom call records, or a 5G base station handling billions of radio samples, the way bytes are ordered can determine whether the data makes perfect sense—or complete nonsense.</p>
<h2>What is endianness?</h2>
<p>An endian system defines the order in which bytes of a multi-byte number are arranged.</p>
<ul>
	<font size="4">
		<li><b>Big-endian:</b> The most significant byte (MSB) comes first, stored at the lowest address.</li>
		<li><b>Little-endian:</b> The least significant byte (LSB) comes first, stored at the lowest address.</li>
	</font>
</ul>
<p>For example, the number 0x12345678 would be arranged as:</p>
<ul>
	<font size="4">
		<li>Big-endian → 12 34 56 78</li>
		<li>Little-endian → 78 56 34 12</li>
	</font>
</ul>
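<p>The byte orderings above can be checked with a few lines of Python, using the standard <code>struct</code> module:</p>

```python
import struct

value = 0x12345678
big = struct.pack(">I", value)     # big-endian: most significant byte first
little = struct.pack("<I", value)  # little-endian: least significant byte first

print(big.hex())     # 12345678
print(little.hex())  # 78563412
```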
<p>While this looks simple, the implications are huge. If one system sends data in little-endian while another expects big-endian, the values may be misread entirely. To avoid this, networking standards like IP, TCP, and UDP enforce big-endian (network byte order) as the universal convention.</p>
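<p>In Python's <code>struct</code> module, the <code>"!"</code> format prefix denotes network byte order, which is defined to be big-endian:</p>

```python
import struct

value = 0x12345678
network = struct.pack("!I", value)  # "!" = network byte order

# Network byte order is big-endian by definition.
assert network == struct.pack(">I", value)
print(network.hex())  # 12345678
```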
<h2>Industries where endianness shapes communication</h2>
<p>From the cell tower to the car dashboard, from IoT devices in our homes to high-speed trading systems, endianness is the silent agreement that keeps industries speaking the same digital language. It may sound like a low-level detail, but it quietly underpins reliable communication across industries.</p>
<p>In telecommunications and 5G, standards mandate big-endian formats so routers, servers, and base stations interpret control messages and packet headers consistently. IoT devices and embedded systems also depend on fixed byte order—sensors streaming temperature, pressure, or GPS data must follow a convention so cloud platforms decode values accurately. The automotive sector is another example: dozens of ECUs from different suppliers must agree on byte order to ensure that speed sensors, braking systems, and infotainment units share correct data. In finance and high-frequency trading, binary protocols demand strict endian rules—any mismatch could distort price feeds or disrupt trades. And in aerospace and defense, radar DSPs, avionics systems, and satellites require exact endian handling to process mission-critical data streams.</p>
<p>Across all these domains, endian consistency acts as an invisible handshake, ensuring that machines with different architectures can still speak the same digital language.</p>
<h2>Use case architecture: From endian to analytics</h2>
<center><b>Figure 1.</b> Architecture diagram for the flow of data.</center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-25 at 7.20.50 AM-vdebi0cs39.png" alt="This diagram is titled Endian Communication Analyser. The IOT devices connect to the Endian converter, which connects to Apache Kafka. This then flows into MongoDB Atlas, which produces the analytics." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>The diagram above illustrates how low-level endian data from IoT devices can be transformed into high-value insights using a modern data pipeline.</p>
<ol>
	<font size="4">
		<li><b><a href="https://www.mongodb.com/solutions/use-cases/internet-of-things">IoT devices</a> (data sources):</b> Multiple IoT devices (e.g., sensors measuring temperature, vibration, or pressure) generate raw binary data. To remain efficient and consistent, these devices often transmit data in a specific endian format (commonly big-endian). However, not all receiving systems use the same convention, which can lead to misinterpretation if left unhandled.</li>
		<li><b>Endian converter:</b> The first processing step ensures that byte ordering is normalized. The endian converter translates raw payloads into a consistent format that downstream systems can understand. Without this step, a simple reading like 25.10°C could be misread as 52745°C—a critical error for industries like telecom or automotive.</li>
		<li><b><a href="https://www.mongodb.com/products/integrations/kafka-connector">Apache Kafka</a> (data transport layer):</b> Once normalized, the data flows into Apache Kafka, a distributed streaming platform. Kafka ensures reliability, scalability, and low latency, allowing thousands of IoT devices to stream data simultaneously. It acts as a buffer and transport mechanism, ensuring smooth handoff between ingestion and storage.</li>
		<li><b><a href="https://www.mongodb.com/products/platform/atlas-stream-processing">Atlas Stream Processing</a> (real-time processing):</b> Inside the MongoDB ecosystem, the Atlas Stream Processor consumes Kafka topics and enriches the data. Here, additional transformations, filtering, or business logic can be applied—such as tagging sensor IDs, flagging anomalies, or aggregating multiple streams into one coherent dataset.</li>
		<li><b><a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> (storage layer):</b> Processed records are stored in MongoDB Atlas, which provides a flexible, document-oriented database model. This is especially valuable for IoT, where payloads may vary in structure depending on the device. MongoDB’s time-series collections ensure efficient handling of timestamped sensor readings at scale.</li>
		<li><b><a href="https://www.mongodb.com/products/platform/atlas-charts">Analytics & visualization</a>:</b> Finally, the clean, structured data becomes available for analytics tools like Tableau. Business users and engineers can visualize patterns, track equipment health, or perform predictive maintenance, turning low-level binary signals into actionable business intelligence.</li>
	</font>
</ol>
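<p>The endian-converter step, and the misread temperature it guards against, can be sketched in Python. Assume a hypothetical sensor that transmits temperature in hundredths of a degree as a big-endian unsigned 16-bit integer:</p>

```python
import struct

# Hypothetical sensor payload: 25.10 degrees C, encoded as hundredths of a
# degree (2510) in a big-endian unsigned 16-bit integer -> bytes 09 CE.
payload = struct.pack(">H", 2510)

correct = struct.unpack(">H", payload)[0]  # decoded with the right byte order
wrong = struct.unpack("<H", payload)[0]    # misread as little-endian

print(correct / 100)  # 25.1
print(wrong)          # 52745
```

<p>Swapping the two bytes turns 0x09CE (2510) into 0xCE09 (52745), which is exactly the kind of error the converter normalizes away before data reaches Kafka.</p>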
<p>Endianness may seem like an obscure technicality buried deep inside processors and protocols, but in reality, it is the foundation of digital trust. Without a shared agreement on how bytes are ordered, the vast networks of IoT devices, telecom systems, cars, satellites, and financial platforms would quickly collapse into chaos.</p>
<p>What makes this powerful is not just the correction of byte order, but what happens after. With pipelines that normalize, stream, and store data—like the one combining Endian conversion, Kafka, MongoDB Atlas, and Tableau—raw binary signals are elevated into business-ready insights. A vibration sensor’s byte sequence becomes an early-warning alert for machine failure; a packet header’s alignment ensures 5G base stations stay synchronized; a GPS reading, once correctly interpreted, guides a connected car safely on its route.</p>
<p>In short, endianness is the invisible handshake between machines. When paired with modern data infrastructure, it transforms silent signals into meaningful stories—bridging the gap between the language of bytes and the language of decisions. To learn more, please <a href="https://www.linkedin.com/feed/update/urn:li:activity:7361497416467968000/?originTrackingId=FHpEG867TkmgInQQeGOCpQ%3D%3D" target="_blank">check out the video</a> of the prototype I have created.</p>
<div class="callout">
<p><b>Boost your MongoDB skills by visiting the <a href="https://www.mongodb.com/resources/product/platform/atlas-learning-hub">MongoDB Atlas Learning Hub</a>.</b></p>
</div>	]]></description>
      <pubDate>Thu, 25 Sep 2025 15:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/technical/endian-communication-systems-information-exchange-in-bytes</link>
      <guid>https://www.mongodb.com/company/blog/technical/endian-communication-systems-information-exchange-in-bytes</guid>
    </item><item>
      <title>MongoDB SQL Interface: Now Available for Enterprise Advanced </title>
      <description><![CDATA[<p>Today, we’re excited to announce the general availability of <a href="https://www.mongodb.com/products/platform/atlas-sql-interface">MongoDB SQL Interface</a> for <a href="https://www.mongodb.com/try/download/enterprise-advanced">MongoDB Enterprise Advanced</a>. This builds upon the foundation established by MongoDB Atlas SQL Interface, which began by extending SQL connectivity to self-managed MongoDB deployments. Teams can now query their MongoDB data directly from familiar BI tools like Tableau and Microsoft’s Power BI using standard ODBC and Java Database Connectivity (JDBC) connections, eliminating the need to learn MongoDB Query Language (MQL), build extract, transform, and load (ETL) pipelines, or move data.</p>
<h2>Bridging the SQL-MongoDB gap</h2>
<p>Organizations new to MongoDB often face a data access challenge: While developers benefit from increased flexibility and performance, teams moving from SQL-based tools often struggle to access the data they need. Without direct SQL connectivity, they must either learn MongoDB’s query language or build and maintain custom ETL pipelines to move data out of MongoDB for reporting and analytics. This creates fragmented operational reporting workflows, with users switching between multiple tools and data sources to piece together the insights they need. These approaches often lead to increased maintenance overhead, outdated data, and dependency bottlenecks.</p>
<p>MongoDB SQL Interface now eliminates this friction by providing direct SQL access to MongoDB data through custom connectors and drivers. This works by generating comprehensive JSON schemas of MongoDB collections and translating standard SQL queries into MongoDB operations in real time. Users can connect from popular BI tools like Tableau and Power BI, or through JDBC and ODBC drivers for other SQL-based tools. They can use familiar SQL syntax, including joins, aggregations, and subqueries through MongoSQL, a SQL-92 compatible dialect designed specifically for MongoDB. This speeds up analysis and enables self-service reporting while maintaining database performance.</p>
<h2>Getting started</h2>
<p>MongoDB SQL Interface is now included with Enterprise Advanced licenses and works with MongoDB 6.0 or higher, requiring no changes to your existing MongoDB server configuration. The setup process involves three main steps:</p>
<ol>
	<font size="4">
		<li><b>Download the MongoDB SQL Schema Builder CLI</b> from the <a href="https://www.mongodb.com/try/download/sql-interface">download center</a>.</li>
		<li><b>Use the command line interface (CLI)</b> to analyze your data structure and generate schemas that map your collections’ document structures to SQL-queryable formats.</li>
		<li><b>Connect your BI tools</b> using MongoDB’s custom connectors for Tableau and Power BI, or JDBC and ODBC drivers for other SQL-based tools.</li>
	</font>
</ol>
<p>The Schema Builder CLI examines your existing collections to understand document patterns, nested objects, and array structures. It then creates JSON Schema definitions that preserve the full richness of your document model while making complex nested structures and arrays queryable through familiar SQL syntax. This schema-first approach ensures optimal query performance and maintains data type accuracy across your SQL operations.</p>
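<p>Conceptually, the schema-inference step resembles the following toy sketch, which merges the field-to-type mappings observed across sample documents (an illustration only; the actual Schema Builder CLI produces full JSON Schema definitions and handles nested objects and arrays):</p>

```python
# Toy illustration of schema inference over sample documents; the actual
# MongoDB SQL Schema Builder CLI is far more sophisticated.
def infer_schema(docs):
    """Merge the field -> type-name mappings observed across documents."""
    schema = {}
    for doc in docs:
        for field, value in doc.items():
            schema.setdefault(field, set()).add(type(value).__name__)
    return {field: sorted(types) for field, types in schema.items()}

samples = [
    {"_id": 1, "name": "sensor-a", "reading": 25.1},
    {"_id": 2, "name": "sensor-b", "reading": 19, "tags": ["hvac"]},
]
print(infer_schema(samples))
# {'_id': ['int'], 'name': ['str'], 'reading': ['float', 'int'], 'tags': ['list']}
```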
<p>Once the MongoDB Schema Builder CLI generates your schemas, it stores them alongside your data. SQL Interface then automatically uses them to validate queries and provide proper type information for results. This creates a seamless bridge between MongoDB’s flexible document model and SQL’s structured query expectations.</p>
<h2>Moving forward from MongoDB BI Connector</h2>
<p>For organizations currently using <a href="https://www.mongodb.com/try/download/bi-connector">MongoDB BI Connector</a>, MongoDB SQL Interface represents a significant improvement to our SQL connectivity solution. The interface addresses several limitations of the MongoDB BI Connector approach, including improved query performance through native MongoDB operations and enhanced schema flexibility that better represents document structures. While support for BI Connector will continue until September 2026, MongoDB SQL Interface offers improved performance, enhanced schema control, and a more intuitive setup process.</p>
<div class="callout">
<p><b>Ready to get started with MongoDB SQL Interface for Enterprise Advanced?</b></p>
<ul>
	<font size="4">
		<li><b><a href="https://dochub.mongodb.org/core/sql-schema-builder">Documentation</a>:</b> Complete the implementation guide with configuration options and best practices.</li>
		<li><b><a href="https://www.mongodb.com/try/download/sql-interface">Download center</a>:</b> Get the MongoDB SQL Schema Builder CLI and drivers for your deployment.</li>
		<li><b><a href="https://translators-connectors-releases.s3.amazonaws.com/mongodb-schema-manager/docs/MongoDB_Schema_Manager-overview.pdf" target="_blank">README</a>:</b> Use this guide as a quick reference for installation and usage.</li>
		<li><b><a href="https://youtu.be/AUnIs0hlIsE?si=gtCN2-GRU_qrz2mu" target="_blank">Demo video</a>:</b> See MongoDB SQL Interface in action with a step-by-step walkthrough.</li>
	</font>
</ul>
</div>]]></description>
      <pubDate>Thu, 25 Sep 2025 12:30:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/product-release-announcements/mongodb-sql-interface-now-available-enterprise-advanced</link>
      <guid>https://www.mongodb.com/company/blog/product-release-announcements/mongodb-sql-interface-now-available-enterprise-advanced</guid>
    </item><item>
      <title>MongoDB is a Glassdoor Best-Led Company of 2025</title>
      <description><![CDATA[<p>2025 has been a big year for MongoDB.</p>
<p>With MongoDB 8.2, our most feature-rich and performant release yet, we are raising the bar for what developers can achieve. <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a>’s embedding models and rerankers are bringing state-of-the-art accuracy and efficiency to building trustworthy, reliable AI applications. And we’ve launched the <a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">MongoDB Application Modernization Platform</a>, or AMP.</p>
<p>Today, MongoDB serves nearly 60,000 organizations across every industry and vertical, including more than 70% of the Fortune 100 and cutting-edge AI-native startups.</p>
<p>On top of these exciting updates, we’re now pleased to announce that MongoDB is among the winners of the annual Glassdoor list of <a href="https://www.glassdoor.com/Award/Best-Led-Companies-LST_KQ0,18.htm" target="_blank">Best-Led Companies</a> in 2025. This list highlights the top 50 companies with more than 1,000 employees whose leaders have been recognized as some of the best. For us, this is not just an external badge of honor—it’s a reflection of the trust and inspiration that MongoDB employees experience every day.</p>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-23 at 9.37.18 AM-4kkzbnntzt.png" alt="Image for Glassdoor award winner" title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>What makes this award so meaningful is that it’s driven entirely by employee feedback. Unlike other workplace awards, there is no self-nomination or application process. To determine the winners, Glassdoor evaluates <a href="https://www.glassdoor.com/Reviews/index.htm" target="_blank">company reviews</a> shared by current and former employees over the past year. That means every comment, rating, and personal experience shapes the final outcome.</p>
<p>At MongoDB, leadership isn’t limited to titles—it’s how we work. Guided by our <a href="https://www.mongodb.com/company/values">values and Leadership Commitment</a>, we push ourselves to think big, act with ownership, and build trust every day. This recognition is proof that our approach is more than words on a page—it’s shaping a culture where people are inspired to grow, innovate, and win together. Our leaders are not only setting strategy and steering the business; they are building an environment where people feel empowered to take risks, to challenge the status quo, and to achieve more than they thought possible.</p>
<h2>Hear more from our employees</h2>
<p>“I joined MongoDB as the Executive Assistant to our CEO, <a href="https://www.linkedin.com/in/dittycheria/" target="_blank">Dev Ittycheria</a>. During my time working with Dev, I saw firsthand the transparent nature of our leadership team. Though they are focused on the market opportunity in front of us and ensuring MongoDB is set up for long-term success, it does not come at the expense of our people. It was a privilege to support the CEO and work closely with his leadership team who lead with our company values and focus on our people in everything they do.” - <i>Ava Thompson, Executive Support</i></p>
<p>“In my time here, I've been fortunate to see and drive change at the individual level, but also see leadership acknowledge and push innovation at the top, underlining the value we place on continuously improving the way we do things at all levels.” - <i>Charles Shim, FP&amp;A</i></p>
<p>“Every quarter, I have honest conversations with leaders about whether I achieved the goals I set and why. My leaders here help me map out a career path, suggest opportunities I hadn’t considered, and provide feedback on how to align my personal goals with my professional growth. Because of that, I feel I’m growing in every way.” - <i>Jin SEO, Customer Success</i></p>
<p>“MongoDB is a hybrid company. Like many of our engineers, I work outside the company headquarters in New York City. I appreciate MongoDB’s approach to hybrid working and that the company leadership cares about the well-being of their employees. There are companies that don’t seem to trust their employees to make decisions, such as which days to come into the office, so I’m thankful for the autonomy I receive at MongoDB to work in a way that’s best for me.” - <i>Andrew Whitaker, Engineering</i></p>
<p>At MongoDB, we continuously strive to deliver great results for our customers, live our values in everything we do, and demonstrate our leadership principles every day. Because we're not just building next-generation technology – we’re building the next generation of leaders, too.</p>
<div class="callout">
<p><b>Visit our <a href="https://www.mongodb.com/company/careers">careers site</a> to learn more about how you can transform your career at MongoDB.</b></p>
</div>	]]></description>
      <pubDate>Wed, 24 Sep 2025 12:29:03 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/culture/mongodb-is-glassdoor-best-led-company-2025</link>
      <guid>https://www.mongodb.com/company/blog/culture/mongodb-is-glassdoor-best-led-company-2025</guid>
    </item><item>
      <title>Build AI Agents Worth Keeping: The Canvas Framework</title>
      <description><![CDATA[<h2>Why 95% of enterprise AI agent projects fail</h2>
<p>Development teams across enterprises are stuck in the same cycle: They start with &quot;Let's try LangChain&quot; before figuring out what agent to build. They explore CrewAI without defining the use case. They implement RAG before identifying what knowledge the agent actually needs. Months later, they have an impressive technical demo showcasing multi-agent orchestration and tool calling—but can't articulate ROI or explain how it solves actual business needs.</p>
<p>According to McKinsey's latest research, while nearly eight in 10 companies report using generative AI, <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage" target="_blank">fewer than 10% of use cases deployed ever make it past the pilot stage</a>. MIT researchers studying this challenge identified a &quot;<a href="https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf" target="_blank">gen AI divide</a>&quot;—a gap between organizations successfully deploying AI and those stuck in perpetual pilots. In their sample of 52 organizations, researchers found patterns suggesting failure rates as high as 95% (pg.3). Whether the true failure rate is 50% or 95%, the pattern is clear: Organizations lack clear starting points, initiatives stall after pilot phases, and most custom enterprise tools fail to reach production.</p>
<h2>6 critical failures killing your AI agent projects</h2>
<p>The gap between agentic AI's promise and its reality is stark. Understanding these failure patterns is the first step toward building systems that actually work.</p>
<h3>1. The technology-first trap</h3>
<p>MIT's research found that while 60% of organizations evaluated enterprise AI tools, <a href="https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf" target="_blank">only 5% reached production</a> (pg.6)—a clear sign that businesses struggle to move from exploration to execution. Teams rush to implement frameworks before defining business problems. While most organizations have moved beyond ad hoc approaches (<a href="https://newsroom.ibm.com/2025-06-10-IBM-Study-Businesses-View-AI-Agents-as-Essential,-Not-Just-Experimental" target="_blank">down from 19% to 6%</a>, according to IBM), they've replaced chaos with structured complexity that still misses the mark.</p>
<p>Meanwhile, one in four companies taking a true &quot;AI-first&quot; approach—starting with business problems rather than technical capabilities—report transformative results. The difference has less to do with technical sophistication and more to do with strategic clarity.</p>
<h3>2. The capability reality gap</h3>
<p>Carnegie Mellon's TheAgentCompany benchmark exposed the uncomfortable truth: <a href="https://www.cs.cmu.edu/news/2025/agent-company#:~:text=TheAgentCompany%20shows%20that%20existing%20AI%20agents%20routinely%20failed%20at%20common%20office%20tasks%2C%20providing%20solace%20to%20people%20fearing%20for%20their%20jobs%20and%20giving%20researchers%20a%20way%20to%20assess%20the%20performance%20of%20evolving%20AI%20models." target="_blank">Even our best AI agents would make terrible employees</a>. The best AI model (Claude 3.5 Sonnet) <a href="https://www.cs.cmu.edu/news/2025/agent-company#:~:text=The%20best%20of%20them%2C%20Claude%203.5%20Sonnet%20from%20Anthropic%2C%20only%20completed%2024%25%20of%20the%20tasks.%20Google%27s%20Gemini%202.0%20Flash%20came%20in%20second%20with%2011.4%25%2C%20and%20OpenAI%27s%20GPT%2D4o%20was%20third%20with%208.6%25." target="_blank">completes only 24% of office tasks</a>, with <a href="https://www.cs.cmu.edu/news/2025/agent-company#:~:text=When%20TheAgentCompany%20gave%20partial%20credit%20for%20tasks%20that%20were%20partially%20completed%2C%20it%20only%20boosted%20Claude%20to%2034.4%25%20and%20Qwen%20to%204.2%25." target="_blank">34.4% success when given partial credit</a>. Agents struggle with basic obstacles, such as pop-up windows, which humans navigate instinctively.</p>
<p>More concerning, when faced with challenges, <a href="https://www.cs.cmu.edu/news/2025/agent-company#:~:text=For%20instance%2C%20when%20the%20agent%20couldn%27t%20find%20a%20particular%20person%20it%20needed%20to%20contact%20in%20the%20company%27s%20chat%20platform%2C%20it%20instead%20renamed%20another%20user%2C%20giving%20it%20the%20name%20of%20the%20person%20it%20was%20seeking." target="_blank">some agents resort to deception</a>, like renaming existing users instead of admitting they can't find the right person. These issues demonstrate fundamental reasoning gaps that make autonomous deployment dangerous in real business environments, rather than just technical limitations.</p>
<h3>3. Leadership vacuum</h3>
<p>The disconnect is glaring: <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage#:~:text=In%20fact%2C,9" target="_blank">Fewer than 30% of companies</a> report CEO sponsorship of the AI agenda despite <a href="https://newsroom.ibm.com/2025-06-10-IBM-Study-Businesses-View-AI-Agents-as-Essential,-Not-Just-Experimental#:~:text=With%2070%25%20of%20surveyed%20executives%20indicating%20that%20agentic%20AI%20is%20important%20to%20their%20organization%27s%20future%2C%20the%20research%20suggests%20that%20many%20organizations%20are%20actively%20encouraging%20experimentation.%C2%A0" target="_blank">70% of executives saying agentic AI is important to their future</a>. This leadership vacuum creates cascading failures—AI initiatives fragment into departmental experiments, lack authority to drive organizational change, and can't break through silos to access necessary resources.</p>
<p>Contrast this with Moderna, <a href="https://www.wsj.com/articles/at-moderna-openais-gpts-are-changing-almost-everything-6ff4c4a5#:~:text=CEO%20St%C3%A9phane%20Bancel%20called%20the%20OpenAI%20partnership%2C%20and%20its%20use%20of%20AI%20in%20general%2C%20key%20to%20helping%20the%20vaccine%20maker%20transform%20every%20business%20process%3A%20%E2%80%9CHow%20do%20we%20use%20it%20at%20scale%20to%20reinvent%20all%20of%20Moderna%E2%80%99s%20business%20processes%2C%20in%20science%2C%20in%20legal%2C%20in%20manufacturing%E2%80%94everywhere.%E2%80%9D%C2%A0" target="_blank">where CEO buy-in</a> drove the <a href="https://openai.com/index/moderna/#:~:text=Moderna%20had%20750%20GPTs%20across%20the%20company" target="_blank">deployment of 750+ AI agents</a> and radical restructuring of HR and IT departments. As with the early waves of Big Data, data science, then machine learning adoption, leadership buy-in is the deciding factor for the survival of generative AI initiatives.</p>
<h3>4. Security and governance barriers</h3>
<p>Organizations are paralyzed by a governance paradox: 92% believe governance is essential, <a href="https://www.sailpoint.com/press-releases/sailpoint-ai-agent-adoption-report#:~:text=According%20to%20the%20report%2C%2082%25%20of%20organizations%20already%20use%20AI%20agents%2C%20but%20only%2044%25%20of%20organizations%20report%20having%20policies%20in%20place%20to%20secure%20them" target="_blank">but only 44% have policies</a> (SailPoint, 2025). The result is predictable—80% experienced AI acting outside intended boundaries, with top concerns including privileged data access (60%), unintended actions (58%), and sharing privileged data (57%). Without clear ethical guidelines, audit trails, and compliance frameworks, even successful pilots can't move to production.</p>
<h3>5. Infrastructure chaos</h3>
<p>The infrastructure gap creates a domino effect of failures. While <a href="https://www.sailpoint.com/press-releases/sailpoint-ai-agent-adoption-report#:~:text=According%20to%20the%20report%2C%2082%25%20of%20organizations%20already%20use%20AI%20agents%2C%20but%20only%2044%25%20of%20organizations%20report%20having%20policies%20in%20place%20to%20secure%20them." target="_blank">82% of organizations</a> already use AI agents, <a href="https://newsroom.ibm.com/2025-06-10-IBM-Study-Businesses-View-AI-Agents-as-Essential,-Not-Just-Experimental#:~:text=Those%20surveyed%20indicate%20that%20concerns%20around%20data%20(49%25)%2C%20trust%20issues%20(46%25)%20and%20skills%20shortages%20(42%25)%20remain%20barriers%20to%20adoption%20for%20their%20organizations.%C2%A0" target="_blank">49% cite data concerns</a> as primary adoption barriers (IBM). Data remains fragmented across systems, making it impossible to provide agents with complete context.</p>
<p>Teams end up managing multiple databases—one for operational data, another for vector data and workloads, a third for conversation memory—each with different APIs and scaling characteristics. This complexity kills momentum before agents can actually prove value.</p>
<h3>6. The ROI mirage</h3>
<p>The optimism-reality gap is staggering. <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage#:~:text=More%20than%2080%20percent%20of%20companies%20still%20report%20no%20material%20contribution%20to%20earnings%20from%20their%20gen%20AI%20initiatives." target="_blank">More than 80% of companies report no material earnings impact</a> from gen AI (McKinsey), while <a href="https://www.pagerduty.com/newsroom/agentic-ai-survey-2025/#:~:text=Executives%20anticipate%20agentic%20AI%20will%20have%20faster%20adoption%20and%20higher%20ROI%20than%20generative%20AI%2C%20with%2062%25%20expecting%20returns%20above%20100%25" target="_blank">62% expect 100%+ ROI from deployment</a> (PagerDuty). Companies measure activity (number of agents deployed) rather than outcomes (business value created). Without clear success metrics defined upfront, even successful implementations look like expensive experiments.</p>
<h2>The AI development paradigm shift: from data-first to product-first</h2>
<p>There's been a fundamental shift in how successful teams approach agentic AI development, and it mirrors what <a href="https://www.swyx.io/about" target="_blank">Shawn Wang (Swyx)</a> observed in his influential &quot;<a href="https://www.latent.space/p/ai-engineer#:~:text=Fire%2C%20ready%2C%20aim,data%20to%20finetune." target="_blank">Rise of the AI Engineer</a>&quot; post about the broader generative AI space.</p>
<h3>The old way: data → model → product</h3>
<p>In the traditional paradigm practiced during the early years of machine learning, teams would spend months architecting datasets, labeling training data, and preparing for model pre-training. Only after training custom models from scratch could they finally incorporate these into product features.</p>
<p>The trade-offs were severe: massive upfront investment, long development cycles, high computational costs, and brittle models with narrow capabilities. This sequential process created high barriers to entry—only organizations with substantial ML expertise and resources could deploy AI features.</p>
<center><caption><b>Figure 1.</b> The Data → Model → Product Lifecycle.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-23 at 7.35.29 AM-8vy6t7f31z.png" alt="This diagram has the title traditional flow: pre-foundation model era. On the left center of this diagram is a box for data, which takes 1-6 months and is for architect datasets, clean & label, and prepare training. This box connects to a box for model through a line titled heavy investment. The model box takes 7-9 months and is for train from scratch, high compute costs, and narrow capabilities. This box then connects to a box titled product through a line titled single purpose. The product box takes 10+ months and has the descriptors of finally integrate, limited features, and brittle deployment. At the bottom is a box that lists the challenges of this approach, such as 10+ months to first value & signal, massive upfront investment, brittle, single-purpose models, and long cycles, limited value, high infrastructure requirements." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"><center>Traditional AI development required months of data preparation and model training before shipping products.</center> </figcaption>
</figure>
<h3>The new way: product → data → model</h3>
<p>The emergence of foundation models changed everything.</p>
<center><caption><b>Figure 2.</b> The Product → Data → Model Lifecycle.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-23 at 7.41.38 AM-uzjhlrru4d.png" alt="The title of this diagram is modern flow: foundation model era. This model begins on the left with product which is associated with week 1 and is for defining user need, building an MVP, and fast iteration. This then leads to data via fast experimentation. Data is associated with week 2 and is for identifying needed knowledge, collecting examples, and structuring for retrieval. Data then connects to model via immediate capability. Model occurs over week 3+ and is for selecting providers, optimizing prompts, and testing performance. The box at the bottom lists the benefits of this approach, which includes days to first value & signal, easy model swapping, data requirements drive model choice, and product hypotheses can be tested with near immediate feedback." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"><center>Foundation model APIs flipped the traditional cycle, enabling rapid experimentation before data and model optimization.</center> </figcaption>
</figure>
<p>Powerful LLMs became commoditized through providers like OpenAI and Anthropic. Now, teams could:</p>
<ol>
	<font size="4">
		<li>Start with the product vision and customer need.</li>
		<li>Identify what data would enhance it (examples, knowledge bases, RAG content).</li>
		<li>Select the appropriate model that could process that data effectively.</li>
	</font>
</ol>	
<p>This enabled zero-shot and few-shot capabilities via simple API calls. Teams could build MVPs in days, define their data requirements based on actual use cases, then select and swap models based on performance needs. Developers now ship experiments quickly, gather insights to improve data (for RAG and evaluation), then fine-tune only when necessary. This democratized cutting-edge AI to all developers, not just those with specialized ML backgrounds.</p>
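The few-shot pattern described above is mostly plain message assembly before any API call. A minimal sketch, assuming the standard chat-message shape used by providers like OpenAI and Anthropic; the classification task and examples are illustrative:

```python
# Assemble a few-shot chat request: worked examples precede the real input,
# so a foundation model can generalize without any fine-tuning.

def build_few_shot_messages(system, examples, user_input):
    """examples: list of (input, expected_output) pairs shown to the model."""
    messages = [{"role": "system", "content": system}]
    for example_in, example_out in examples:
        messages.append({"role": "user", "content": example_in})
        messages.append({"role": "assistant", "content": example_out})
    messages.append({"role": "user", "content": user_input})
    return messages

messages = build_few_shot_messages(
    system="Classify support tickets as 'billing' or 'technical'.",
    examples=[("My invoice is wrong", "billing"),
              ("The app crashes on login", "technical")],
    user_input="I was charged twice this month",
)
# `messages` can be passed to any chat-completion endpoint.
```

Because the examples live in the prompt rather than in training data, swapping models or iterating on the product hypothesis is a matter of editing this list, which is exactly what makes the product-first flow fast.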
<h3>The agentic evolution: product → agent → data → model</h3>
<p>But for agentic systems, there's an even more important insight: Agent design sits between product and data.</p>
<center><caption><b>Figure 3.</b> The Product → Agent → Data → Model Lifecycle.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-23 at 7.59.33 AM-yauwdcos5r.png" alt="This diagram is titled agentic flow: foundation model era. This diagram begins on the left with product where you define the problem. This connects to agent via user-first design, and the agent is for design behavior. Agent then goes to data via determines requirements, and data is for enhancing performance. Data connects to model via match to agent needs, and the model step is for select provider. The new considerations of this are that the agent layer orchestrates everything, tools & workflows before model selection, and data enhances, doesn't enable. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"><center>Agent design now sits between product and data, determining downstream requirements for knowledge, tools, and model selection.</center> </figcaption>
</figure>
<p>Now, teams follow this progression:</p>
<ol>
	<font size="4">
		<li><b>Product:</b> Define the user problem and success metrics.</li>
		<li><b>Agent:</b> Design agent capabilities, workflows, and behaviors.</li>
		<li><b>Data:</b> Determine what knowledge, examples, and context the agent needs.</li>
		<li><b>Model:</b> Select external providers and optimize prompts for your data.</li>
	</font>
</ol>
<p>With external model providers, the &quot;model&quot; phase is really about selection and integration rather than deployment. Teams choose which provider's models best handle their data and use case, then build the orchestration layer to manage API calls, handle failures, and optimize costs.</p>
<p>The agent layer shapes everything downstream—determining what data is needed (knowledge bases, examples, feedback loops), what tools are required (search, calculation, code execution), and ultimately, which external models can execute the design effectively.</p>
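That orchestration layer can be sketched in a few lines. This is a hedged illustration, not a production pattern: the providers are stand-in callables (real clients would wrap each vendor's SDK), and the retry counts and backoff are arbitrary:

```python
import time

def call_with_fallback(providers, prompt, retries=2, backoff=0.1):
    """Try each (name, callable) provider in priority order; retry transient
    failures with exponential backoff, then fall through to the next provider."""
    last_error = None
    for name, call in providers:
        for attempt in range(retries):
            try:
                return name, call(prompt)
            except Exception as exc:  # production code would catch provider-specific errors
                last_error = exc
                time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"All providers failed: {last_error}")

# Stand-in callables: the primary always fails, the fallback answers.
providers = [
    ("primary", lambda p: (_ for _ in ()).throw(TimeoutError("down"))),
    ("fallback", lambda p: f"answer to: {p}"),
]
name, answer = call_with_fallback(providers, "summarize Q3 report")
# name == "fallback", answer == "answer to: summarize Q3 report"
```

The point is that failure handling and cost-aware routing live in your code, not in the model, which is why the agent layer, not the model, shapes the architecture.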
<p>This evolution means teams can start with a clear user problem, design an agent to solve it, identify necessary data, and then select appropriate models—rather than starting with data and hoping to find a use case. This is why the canvas framework follows this exact flow.</p>
<h2>The canvas framework: a systematic approach to building AI agents</h2>
<p>Rather than jumping straight into technical implementation, successful teams use structured planning frameworks. Think of them as &quot;business model canvases for AI agents&quot;—tools that help teams think through critical decisions in the right order.</p>
<p>Two complementary frameworks directly address the common failure patterns:</p>
<center><caption><b>Figure 4.</b> The Agentic AI Canvas Framework.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-23 at 8.07.20 AM-4pc5pxkwhd.png" alt="This diagram is titled agent AI canvas framework: From idea to production. This process goes from business problem, to POC Canvas, to prototype & launch, then to production canvas, and finally production agent. " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"><center>A structured five-phase approach moving from business problem definition through POC, prototype, production canvas, and production agent deployment. Please see the “Resources” section at the end for links to the corresponding templates, hosted in the gen AI Showcase.</center></figcaption>
</figure>
<h3>Canvas #1 - The POC canvas for validating your agent idea</h3>
<p>The POC canvas implements the product → agent → data → model flow through eight focused squares designed for rapid validation:</p>
<center><caption><b>Figure 5.</b> The Agent POC Canvas V1.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-23 at 8.11.06 AM-0gs75a3btz.png" alt="This table is titled agent POC: Canvas 1. The description at the top of the table says the canvas helps teams systematically work through all aspects of an agentic AI project while avoiding redundancy and ensuring nothing critical is missed." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"><center>Eight focused squares implementing the product → agent → data → model flow for rapid validation of AI agent concepts.</center></figcaption>
</figure>
<h4>Phase 1: Product validation—who needs this and why?</h4>
<p>Before building anything, you must validate that a real problem exists and that users actually want an AI agent solution. This phase prevents the common mistake of building impressive technology that nobody needs. If you can't clearly articulate who will use this and why they'll prefer it to current methods, stop here.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
					<th>Square</th>  
					<th>Purpose</th>
            <th>Key Questions</th>
        </tr>
        <tr>
					<td>Product vision & user problem</td>
					<td>Define the business problem and establish why an agent is the right solution.</td>
					<td>
						<ul>
							<li><b>Core problem:</b> What specific workflow frustrates users today?</li><li><b>Target users:</b> Who experiences this pain and how often?</li>
							<li><b>Success vision:</b> What would success look like for users?</li>
							<li><b>Value hypothesis:</b> Why would users prefer an agent to current solutions?</li>
									</ul>
								</td>
        </tr>
			<tr>
				<td>User validation & interaction</td>
				<td>Map how users will engage with the agent and identify adoption barriers.</td>
				<td>
					<ul>
						<li><b>User journey:</b> What's the complete interaction from start to finish?</li>
						<li><b>Interface preference:</b> How do users want to interact?</li>
						<li><b>Feedback mechanisms:</b> How will you know it's working?</li>
						<li><b>Adoption barriers:</b> What might prevent users from trying it?</li>
					</ul>
				</td>
			</tr>
    </table>
</body>
</html>
<h4>Phase 2: Agent design—what will it do and how?</h4>
<p>With a validated problem, design the agent's capabilities and behavior to solve that specific need. This phase defines the agent's boundaries, decision-making logic, and interaction style before any technical implementation. The agent design directly determines what data and models you'll need, making this the critical bridge between problem and solution.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
					<th>Square</th>  
					<th>Purpose</th>
            <th>Key Questions</th>
        </tr>
        <tr>
					<td>Agent capabilities & workflow</td>
					<td>Design what the agent must do to solve the identified problem.</td>
					<td>
						<ul>
							<li><b>Core tasks:</b> What specific actions must the agent perform?</li>
							<li><b>Decision logic:</b> How should complex requests be broken down?</li>
							<li><b>Tool requirements:</b> What capabilities does the agent need?</li>
							<li><b>Autonomy boundaries:</b> What can it decide versus escalate?</li>
						</ul>
								</td>
        </tr>
			<tr>
				<td>Agent interaction & memory</td>
				<td>Establish communication style and context management.</td>
				<td>
					<ul>
						<li><b>Conversation flow:</b> How should the agent guide interactions?</li>
						<li><b>Personality and tone:</b> What style fits the use case?</li>
						<li><b>Memory requirements:</b> What context must persist?</li>
						<li><b>Error handling:</b> How should confusion be managed?</li>
					</ul>
				</td>
			</tr>
    </table>
</body>
</html>
<h4>Phase 3: Data requirements—what knowledge does it need?</h4>
<p>Agents are only as good as their knowledge base, so identify exactly what information the agent needs to complete its tasks. This phase maps existing data sources and gaps before selecting models, ensuring you don't choose technology that can't handle your data reality. Understanding data requirements upfront prevents the costly mistake of selecting models that can't work with your actual information.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
					<th>Square</th>  
					<th>Purpose</th>
            <th>Key Questions</th>
        </tr>
        <tr>
					<td>Knowledge requirements & sources</td>
					<td>Identify essential information and where to find it.</td>
					<td>
						<ul>
							<li><b>Essential knowledge:</b> What information must the agent have to complete tasks?</li>
							<li><b>Data sources:</b> Where does this knowledge currently exist?</li>
							<li><b>Update frequency:</b> How often does this information change?</li>
							<li><b>Quality requirements:</b> What accuracy level is needed?</li>
						</ul>
								</td>
        </tr>
			<tr>
				<td>Data collection & enhancement strategy</td>
				<td>Plan data gathering and continuous improvement.</td>
				<td>
					<ul>
						<li><b>Collection strategy:</b> How will initial data be gathered?</li>
						<li><b>Enhancement priority:</b> What data has the biggest impact?</li>
						<li><b>Feedback loops:</b> How will interactions improve the data?</li>
						<li><b>Integration method:</b> How will data be ingested and updated?</li>
					</ul>
				</td>
			</tr>
    </table>
</body>
</html>
<h4>Phase 4: External model integration—which provider and how?</h4>
<p>Only after defining data needs should you select external model providers and build the integration layer. This phase tests whether available models can handle your specific data and use case while staying within budget. The focus is on prompt engineering and API orchestration rather than model deployment, reflecting how modern AI agents actually get built.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
					<th>Square</th>  
					<th>Purpose</th>
            <th>Key Questions</th>
        </tr>
        <tr>
					<td>Provider selection & prompt engineering</td>
					<td>Choose external models and optimize for your use case.</td>
					<td>
						<ul>
							<li><b>Provider evaluation:</b> Which models handle your requirements best?</li>
							<li><b>Prompt strategy:</b> How should you structure requests for optimal results?</li>
							<li><b>Context management:</b> How should you work within token limits?</li>
							<li><b>Cost validation:</b> Is this economically viable at scale?</li>
							</ul>
								</td>
        </tr>
			<tr>
				<td>API integration & validation</td>
				<td>Build orchestration and validate performance.</td>
				<td>
					<ul>
						<li><b>Integration architecture:</b> How do you connect to providers?</li>
						<li><b>Response processing:</b> How do you handle outputs?</li>
						<li><b>Performance testing:</b> Does it meet requirements?</li>
						<li><b>Production readiness:</b> What needs hardening?</li>
					</ul>
				</td>
			</tr>
    </table>
</body>
</html>
<br>
<center><caption><b>Figure 6.</b> The Agent POC Canvas V1 (Detailed).</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-23 at 8.35.15 AM-ubzvmbi1nu.png" alt="Table diagram titled Agent POC: Canvas V1 - detailed. The description for the table says the canvas helps teams systematically work through all aspects of an agentic AI project while avoiding redundancy and ensuring nothing critical is missed." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"><center>Expanded view with specific guidance for each of the eight squares covering product validation, agent design, data requirements, and external model integration.</center></figcaption>
</figure>
<h3>Unified data architecture: solving the infrastructure chaos</h3>
<p>Remember the infrastructure problem—teams managing three separate databases with different APIs and scaling characteristics? This is where a unified data platform becomes critical.</p>
<p>Agents need three types of data storage:</p>
<ul>
	<font size="4">
	<li><b>Application database:</b> For business data, user profiles, and transaction history</li>
	<li><b>Vector store:</b> For semantic search, knowledge retrieval, and RAG</li>
	<li><b>Memory store:</b> For agent context, conversation history, and learned behaviors</li>
	</font>
</ul>
<p>Instead of juggling multiple systems, teams can use a unified platform like MongoDB Atlas that provides all three capabilities—flexible document storage for application data, native vector search for semantic retrieval, and rich querying for memory management—all in a single platform.</p>
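As a sketch of what "one platform, three stores" looks like in practice: the three collections named in the comments below would live in a single database, and the aggregation pipeline follows the standard Atlas `$vectorSearch` stage shape. The index and field names are assumptions for illustration:

```python
# One logical MongoDB database covers all three agent storage needs:
#   users        -> application data (profiles, transactions)
#   knowledge    -> embedded chunks for RAG retrieval
#   agent_memory -> conversation history and learned context

def build_vector_search_pipeline(query_vector, limit=5):
    """Aggregation pipeline for semantic retrieval over the knowledge collection."""
    return [
        {"$vectorSearch": {
            "index": "knowledge_vector_index",  # assumed Atlas Vector Search index name
            "path": "embedding",                # assumed field holding the embeddings
            "queryVector": query_vector,
            "numCandidates": limit * 20,        # oversample candidates for better recall
            "limit": limit,
        }},
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_vector_search_pipeline([0.1, 0.2, 0.3], limit=3)
# With pymongo this would run as: db.knowledge.aggregate(pipeline)
```

The same database handle serves transactional reads, vector retrieval, and memory queries, which is the operational simplification the paragraph above describes.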
<p>This unified approach means teams can focus on agent logic, prompt engineering, and orchestration rather than database management, while maintaining the flexibility to evolve their data model as requirements become clearer. The data platform handles the complexity while you optimize how external models interact with your knowledge.</p>
<p>For embeddings and search relevance, specialized models like Voyage AI can provide domain-specific understanding, particularly for technical documentation where general-purpose embeddings fall short. The combination of unified data architecture with specialized embedding models addresses the infrastructure chaos that kills projects.</p>
<h3>Canvas #2 - The production canvas for scaling your validated AI agent</h3>
<p>When a POC succeeds, the production canvas guides the transition from &quot;it works&quot; to &quot;it works at scale&quot; through 11 squares organized along the same product → agent → data → model flow, with additional operational concerns:</p>
<center><caption><b>Figure 7.</b> The Productionize Agent Canvas V1.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-23 at 8.41.31 AM-os2mjfdrcz.png" alt="Table diagram titled productionize agent: Canvas V1. The description is this canvas guides enterprise teams through the complete journey from validated POC to production-ready agentic systems, addressing technical architecture, business requirements, and operational excellence." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"><center>Eleven squares guiding the transition from validated POC to production-ready systems, addressing scale, architecture, operations, and governance.</center> </figcaption>
</figure>
<h4>Phase 1: Product and scale planning</h4>
<p>Transform POC learnings into concrete business metrics and scale requirements for production deployment. This phase establishes the economic case for investment and defines what success looks like at scale. Without clear KPIs and growth projections, production systems become expensive experiments rather than business assets.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
					<th>Square</th>  
					<th>Purpose</th>
            <th>Key Questions</th>
        </tr>
        <tr>
					<td>Business case & scale planning</td>
					<td>Translate POC validation into production metrics.</td>
					<td>
						<ul>
							<li><b>Proven value:</b> What did the POC validate?</li>
								<li><b>Business KPIs:</b> What metrics measure ongoing success?</li>
								<li><b>Scale requirements:</b> How many users and interactions?</li>
								<li><b>Growth strategy:</b> How will usage expand over time?</li>
							</ul>
								</td>
        </tr>
			<tr>
				<td>Production requirements & constraints</td>
				<td>Define performance standards and operational boundaries.</td>
				<td>
					<ul>
						<li><b>Performance standards:</b> Response time, availability, throughput?</li>
						<li><b>Reliability requirements:</b> Recovery time and failover?</li>
						<li><b>Budget constraints:</b> Cost limits and optimization targets?</li>
						<li><b>Security needs:</b> Compliance and data protection requirements?</li>
					</ul>
				</td>
			</tr>
    </table>
</body>
</html>
<h4>Phase 2: Agent architecture</h4>
<p>Design robust systems that handle complex workflows, multiple agents, and inevitable failures without disrupting users. This phase addresses the orchestration and fault tolerance that POCs ignore but production demands. The architecture decisions here determine whether your agent can scale from 10 users to 10,000 without breaking.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
					<th>Square</th>  
					<th>Purpose</th>
            <th>Key Questions</th>
        </tr>
        <tr>
					<td>Robust agent architecture</td>
					<td>Design for complex workflows and fault tolerance.</td>
					<td>
						<ul>
							<li><b>Workflow orchestration:</b> How do you manage multi-step processes?</li>
							<li><b>Multi-agent coordination:</b> How do specialized agents collaborate?</li>
							<li><b>Fault tolerance:</b> How do you handle failures gracefully?</li>
							<li><b>Update rollouts:</b> How do you update without disruption?</li>
							</ul>
								</td>
        </tr>
			<tr>
				<td>Production memory & context systems</td>
				<td>Implement scalable context management.</td>
				<td>
					<ul>
						<li><b>Memory architecture:</b> Session, long-term, and organizational knowledge?</li>
						<li><b>Context persistence:</b> Storage and retrieval strategies?</li>
						<li><b>Cross-session continuity:</b> How do you maintain user context?</li>
						<li><b>Memory lifecycle management:</b> Retention, archival, and cleanup?</li>
					</ul>
				</td>
			</tr>
    </table>
</body>
</html>
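For the memory lifecycle square above, one common retention pattern is a TTL (time-to-live) index on session memory so expired context is cleaned up automatically by the database. The collection and field names here are assumptions, a sketch rather than a prescription:

```python
import datetime

# MongoDB TTL index semantics: documents are deleted `expireAfterSeconds`
# after the timestamp stored in the indexed field.
SESSION_TTL_SECONDS = 30 * 24 * 3600  # retain session memory for 30 days

def session_memory_doc(session_id, role, content):
    """Shape of a conversation-memory record; `created_at` drives TTL expiry."""
    return {
        "session_id": session_id,
        "role": role,
        "content": content,
        "created_at": datetime.datetime.now(datetime.timezone.utc),
    }

# With pymongo, the index would be created once per collection:
#   db.agent_memory.create_index("created_at", expireAfterSeconds=SESSION_TTL_SECONDS)
doc = session_memory_doc("sess-42", "user", "What were my last three orders?")
```

Long-term and organizational memory would live in separate collections without TTL, so only transient session context ages out.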
<h4>Phase 3: Data infrastructure</h4>
<p>Build the data foundation that unifies application data, vector storage, and agent memory in a manageable platform. This phase solves the &quot;three database problem&quot; that kills production deployments through complexity. A unified data architecture reduces operational overhead while enabling the sophisticated retrieval and context management that production agents require.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
					<th>Square</th>  
					<th>Purpose</th>
            <th>Key Questions</th>
        </tr>
        <tr>
					<td>Data architecture & management</td>
					<td>Build a unified platform for all data types.</td>
					<td>
						<ul>
							<li><b>Platform architecture:</b> Application, vector, and memory data?</li>
							<li><b>Data pipelines:</b> Ingestion, processing, and updates?</li>
							<li><b>Quality assurance:</b> Validation and freshness monitoring?</li>
							<li><b>Knowledge governance:</b> Version control and approval workflows?</li>
							</ul>
								</td>
        </tr>
			<tr>
				<td>Knowledge base & pipeline operations</td>
				<td>Maintain and optimize knowledge systems.</td>
				<td>
					<ul>
						<li><b>Update strategy:</b> How does knowledge evolve?</li>
						<li><b>Embedding approach:</b> Which models for which content?</li>
						<li><b>Retrieval optimization:</b> Search relevance and reranking?</li>
						<li><b>Operational monitoring:</b> Pipeline health and costs?</li>
					</ul>
				</td>
			</tr>
    </table>
</body>
</html>
<h4>Phase 4: Model operations</h4>
<p>Implement strategies for managing multiple model providers, fine-tuning, and cost optimization at production scale. This phase covers API management, performance monitoring, and the continuous improvement pipeline for model performance. The focus is on orchestrating external models efficiently rather than deploying your own, including when and how to fine-tune.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
					<th>Square</th>  
					<th>Purpose</th>
            <th>Key Questions</th>
        </tr>
        <tr>
					<td>Model strategy & optimization</td>
					<td>Manage providers and fine-tuning strategies.</td>
					<td>
						<ul>
							<li><b>Provider selection:</b> Which models for which tasks?</li>
							<li><b>Fine-tuning approach:</b> When and how to customize?</li>
							<li><b>Routing logic:</b> Base versus fine-tuned model decisions?</li>
							<li><b>Cost controls:</b> Caching and intelligent routing?</li>
							</ul>
								</td>
        </tr>
			<tr>
				<td>API management & monitoring</td>
				<td>Handle external APIs and performance tracking.</td>
				<td>
					<ul>
						<li><b>API configuration:</b> Key management and failover?</li>
						<li><b>Performance tracking:</b> Accuracy, latency, and costs?</li>
						<li><b>Fine-tuning pipeline:</b> Data collection for improvement?</li>
						<li><b>Version control:</b> A/B testing and rollback strategies?</li>
					</ul>
				</td>
			</tr>
    </table>
</body>
</html>
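The cost controls and routing logic in the squares above can be sketched as a cache-fronted router. The routing heuristic and the stand-in callables are illustrative assumptions; production systems typically route with a small classifier rather than a keyword check:

```python
import hashlib

_cache = {}

def route_and_call(prompt, call_base, call_finetuned):
    """Serve repeated prompts from cache; route domain-specific prompts to the
    fine-tuned model and everything else to the cheaper base model."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key in _cache:  # identical prompt seen before: zero API cost
        return _cache[key]
    # Illustrative routing rule, standing in for a learned router.
    call = call_finetuned if "policy" in prompt.lower() else call_base
    result = call(prompt)
    _cache[key] = result
    return result

# Stand-in callables in place of real provider clients:
answer = route_and_call("Summarize our refund policy",
                        call_base=lambda p: "base:" + p,
                        call_finetuned=lambda p: "ft:" + p)
# Routed to the fine-tuned model because the prompt mentions "policy".
```

Even this naive cache eliminates repeat-prompt spend, and the routing decision is the natural place to attach the A/B testing and rollback strategies the canvas asks about.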
<h4>Phase 5: Hardening and operations</h4>
<p>Add the security, compliance, user experience, and governance layers that transform a working system into an enterprise-grade solution. This phase addresses the non-functional requirements that POCs skip but enterprises demand. Without proper hardening, even the best agents remain stuck in pilot purgatory due to security or compliance concerns.</p>
<html>
<head>
     <style>
    table,
    th,
    td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th,
    td {
        padding: 5px;
    }
    </style>
</head>
<body>
    <table style="width:100%">
        <tr>
					<th>Square</th>  
					<th>Purpose</th>
            <th>Key Questions</th>
        </tr>
        <tr>
					<td>Security & compliance</td>
					<td>Implement enterprise security and regulatory controls.</td>
					<td>
						<ul>
							<li><b>Security implementation:</b> Authentication, encryption, and access management?</li>
							<li><b>Access control:</b> User and system access management?</li>
							<li><b>Compliance framework:</b> Which regulations apply?</li>
							<li><b>Audit capabilities:</b> Logging and retention requirements?</li>
							</ul>
								</td>
        </tr>
			<tr>
				<td>User experience & adoption</td>
				<td>Drive usage and gather feedback.</td>
				<td>
					<ul>
						<li><b>Workflow integration:</b> How do you fit existing processes?</li>
						<li><b>Adoption strategy:</b> Rollout and engagement plans?</li>
						<li><b>Support systems:</b> Documentation and help channels?</li>
						<li><b>Feedback integration:</b> How does user input drive improvement?</li>
					</ul>
				</td>
			</tr>
			<tr>
				<td>Continuous improvement & governance</td>
				<td>Ensure long-term sustainability.</td>
				<td>
					<ul>
						<li><b>Operational procedures:</b> Maintenance and release cycles?</li>
						<li><b>Quality gates:</b> Testing and deployment standards?</li>
						<li><b>Cost management:</b> Budget monitoring and optimization?</li>
						<li><b>Continuity planning:</b> Documentation and team training?</li>
					</ul>
				</td>
			</tr>
    </table>
</body>
</html>
<br>
<center><caption><b>Figure 8.</b> The Productionize Agent Canvas V1 (Detailed).</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-23 at 9.10.21 AM-hczx63dr6t.png" alt="Table diagram titled productionize agent: Canvas V1 - Detailed. The description is this canvas guides enterprise teams through the complete journey from validated POC to production-ready agentic systems, addressing technical architecture, business requirements, and operational excellence." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"><center>Expanded view with specific guidance for each of the eleven squares covering scale planning, architecture, data infrastructure, model operations, and hardening requirements.</center> </figcaption>
</figure>
<h2>Next steps: start building AI agents that deliver ROI</h2>
<p>MIT's research found that <a href="https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf" target="_blank">66% of executives want systems that learn from feedback</a>, while 63% demand context retention (p. 14). The dividing line between the AI systems users embrace and those they abandon is memory, adaptability, and learning capability.</p>
<p>The canvas framework directly addresses the failure patterns plaguing most projects by forcing teams to answer critical questions in the right order—following the product → agent → data → model flow that successful teams have discovered.</p>
<p>For your next agentic AI initiative:</p>
<ul>
	<font size="4">
		<li>Start with the POC canvas to validate concepts quickly.</li>
		<li>Focus on user problems before technical solutions.</li>
		<li>Leverage AI tools to rapidly prototype after completing your canvas.</li>
		<li>Only scale what users actually want with the production canvas.</li>
		<li>Choose a unified data architecture to reduce complexity from day one.</li>
	</font>
</ul>	
<p>Remember: The goal isn't to build the most sophisticated agent possible—it's to build agents that solve real problems for real users in production environments.</p>
<div class="callout">
<p><b>For hands-on guidance on memory management, check out our <a href="https://www.youtube.com/watch?v=n-slj72yx8w" target="_blank">webinar</a> on YouTube, which covers essential concepts and proven techniques for building memory-augmented agents.</b></p>
<p><b>Head over to the <a href="https://www.mongodb.com/resources/use-cases/artificial-intelligence?tck=augmented_ai_agents_blog">MongoDB AI Learning Hub</a> to learn how to build and deploy AI applications with MongoDB.</b></p>
</div>
<h3>Resources</h3>
<ul>
	<font size="4">
		<li><a href="https://github.com/mongodb-developer/GenAI-Showcase/blob/main/resources/agent-canvas/MongoDB-agentic-poc-canvas.pdf" target="_blank">Download POC Canvas Template</a> (PDF)</li>
		<li><a href="https://github.com/mongodb-developer/GenAI-Showcase/blob/main/resources/agent-canvas/MongoDB-agent-productionization-canvas.pdf" target="_blank">Download Production Canvas Template</a> (PDF)</li>
		<li><a href="https://github.com/mongodb-developer/GenAI-Showcase/blob/main/resources/agent-canvas/MongoDB-combined-agentic-AI-planning-canvas.xlsx" target="_blank">Download Combined POC + Production Canvas</a> (Excel) - Get both canvases in a single Excel file, with example prompts and blank templates.</li>
	</font>
</ul>
<h3>Full reference list</h3>
<ol>
	<font size="4">
		<li><b>McKinsey & Company</b>. (2025). "Seizing the agentic AI advantage." <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage" target="_blank">https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage</a></li>
		<li><b>MIT NANDA</b>. (2025). "The GenAI Divide: State of AI in Business 2025." <a href="https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf" target="_blank">Report</a></li> 
		<li><b>Gartner</b>. (2025). "Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027." <a href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027" target="_blank">https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027</a></li>
		<li><b>IBM</b>. (2025). "IBM Study: Businesses View AI Agents as Essential, Not Just Experimental." <a href="https://newsroom.ibm.com/2025-06-10-IBM-Study-Businesses-View-AI-Agents-as-Essential,-Not-Just-Experimental" target="_blank">https://newsroom.ibm.com/2025-06-10-IBM-Study-Businesses-View-AI-Agents-as-Essential,-Not-Just-Experimental</a></li>
		<li><b>Carnegie Mellon University</b>. (2025). "TheAgentCompany: Benchmarking LLM Agents." <a href="https://www.cs.cmu.edu/news/2025/agent-company" target="_blank">https://www.cs.cmu.edu/news/2025/agent-company</a></li>
		<li><b>Swyx</b>. (2023). "The Rise of the AI Engineer." Latent Space. <a href="https://www.latent.space/p/ai-engineer" target="_blank">https://www.latent.space/p/ai-engineer</a></li>
		<li><b>SailPoint</b>. (2025). "SailPoint research highlights rapid AI agent adoption, driving urgent need for evolved security." <a href="https://www.sailpoint.com/press-releases/sailpoint-ai-agent-adoption-report" target="_blank">https://www.sailpoint.com/press-releases/sailpoint-ai-agent-adoption-report</a></li>
		<li><b>SS&C Blue Prism</b>. (2025). "Generative AI Statistics 2025." <a href="https://www.blueprism.com/resources/blog/generative-ai-statistics-2025/" target="_blank">https://www.blueprism.com/resources/blog/generative-ai-statistics-2025/</a></li>
		<li><b>PagerDuty</b>. (2025). "State of Digital Operations Report." <a href="https://www.pagerduty.com/newsroom/2025-state-of-digital-operations-study/" target="_blank">https://www.pagerduty.com/newsroom/2025-state-of-digital-operations-study/</a></li>
		<li><b>Wall Street Journal</b>. (2024). "How Moderna Is Using AI to Reinvent Itself." <a href="https://www.wsj.com/articles/at-moderna-openais-gpts-are-changing-almost-everything-6ff4c4a5" target="_blank">https://www.wsj.com/articles/at-moderna-openais-gpts-are-changing-almost-everything-6ff4c4a5</a></li>
	</font>
</ol>	]]></description>
      <pubDate>Tue, 23 Sep 2025 16:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/technical/build-ai-agents-worth-keeping-canvas-framework</link>
      <guid>https://www.mongodb.com/company/blog/technical/build-ai-agents-worth-keeping-canvas-framework</guid>
    </item><item>
      <title>Simplify AI-Driven Data Connectivity With MongoDB and MCP Toolbox</title>
      <description><![CDATA[<p>The wave of generative AI applications is revolutionizing how businesses interact with and derive value from their data. Organizations need solutions that simplify these interactions and ensure compatibility with an expanding ecosystem of databases. Enter <a href="https://github.com/googleapis/genai-toolbox/blob/main/README.md" target="_blank">MCP Toolbox for Databases</a>, an open-source Model Context Protocol (MCP) server that enables seamless integration between gen AI agents and enterprise data sources using a standardized protocol pioneered by Anthropic. With the built-in capability to query multiple data sources simultaneously and unify results, MCP Toolbox eliminates fragmented integration challenges, empowering businesses to unlock the full potential of their data.</p>
<p>With <a href="https://www.mongodb.com/products/platform">MongoDB Atlas</a> now joining the ecosystem of databases supported by MCP Toolbox, enterprises using MongoDB’s industry-leading cloud-native database platform can benefit from streamlined connections to their gen AI systems.</p>
<p>As businesses adopt gen AI to unlock insights and automate workflows, the choice of database is critical to meeting demands for dynamic data structures, scalability, and high-performance applications. MongoDB Atlas, with its fully managed, document-oriented NoSQL design and capabilities for flexible schema modeling, is the ultimate companion to MCP Toolbox for applications requiring unstructured or semistructured data connectivity.</p>
<p>This blog post explores how MongoDB Atlas integrates into MCP Toolbox, its advantages for developers, and the key use cases for enabling AI-driven data solutions in enterprise environments.</p>
<center><caption><b>Figure 1.</b> MongoDB as a source for MCP Toolbox for Databases.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-19 at 2.04.49 PM-dg7ezporjy.png" alt="This diagram has the agents for application and the agents for developer assistance both connecting to the MCP toolbox for databases, which then connects to MongoDB Atlas." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<h2>How it works</h2>
<p>The integration of MongoDB Atlas with MCP Toolbox enables users to perform Create, Read, Update, Delete (CRUD) operations on MongoDB data sources using the standardized MCP. Beyond fundamental data management tasks, this integration also unlocks capabilities from MongoDB’s <a href="https://www.mongodb.com/docs/manual/aggregation/">aggregation framework</a>, enabling users to seamlessly execute complex data transformations, computations, and analyses. This empowers businesses to not only access and modify their data but also uncover valuable insights by harnessing MongoDB’s powerful query functionality within workflows driven by MCP Toolbox. By combining the scalability and flexibility of MongoDB Atlas with MCP Toolbox’s ability to query across multiple data sources, organizations can develop advanced AI-driven applications, enhance operational efficiency, and uncover deeper analytical opportunities.</p>
<p>The use of MongoDB as both a source and a sink within MCP Toolbox is simple and highly versatile, thanks to the flexibility of the configuration file. To configure MongoDB as a data source, you can define it under the sources section, specifying parameters such as its kind (&quot;mongodb&quot;) and the connection’s Uniform Resource Identifier (URI) to establish access to your MongoDB instance.</p>
<pre><code tabindex="0">sources:&NewLine;    my-mongodb:&NewLine;        kind: mongodb&NewLine;        uri: &quot;mongodb+srv://username:password@host.mongodb.net&quot;&NewLine;</code></pre>
<p>In the tools section, various operations—such as retrieving, updating, inserting, or deleting data—can be defined by linking the appropriate source, specifying the target database and collection, and configuring parameters such as filters, projections, sorting, or payload structures. Databases can also act as sinks by enabling operations that write new records or modify existing ones, making them ideal for workflows where applications or systems need to interact dynamically with persistent storage. The toolsets section groups related tools, making it easy to load and manage specific sets of operations for different use cases. Whether used for reading or writing data, integrating databases via MCP Toolbox provides a streamlined and consistent approach to managing diverse data sources. Below is an example of running a &quot;find&quot; query on MongoDB Atlas using MCP Toolbox.</p>
<pre><code tabindex="0">tools:&NewLine;  get_user_profile:&NewLine;    kind: mongodb-find-one&NewLine;    source: my-mongodb&NewLine;    description: Retrieves a user's profile by their email address.&NewLine;    database: user_data&NewLine;    collection: profiles&NewLine;    filterPayload: |&NewLine;        { &quot;email&quot;: {{json .email}} }&NewLine;    filterParams:&NewLine;      - name: email&NewLine;        type: string&NewLine;        description: The email address of the user to find.&NewLine;    projectPayload: |&NewLine;        { &NewLine;          &quot;password_hash&quot;: 0,&NewLine;          &quot;login_history&quot;: 0&NewLine;        }&NewLine;</code></pre>
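<p>A toolsets entry then groups tools by name so an agent can load just the set it needs. A minimal sketch, assuming the <code>get_user_profile</code> tool defined above (the second tool name is hypothetical):</p>

```yaml
toolsets:
  # Load only the tools an agent needs for a given workflow.
  user_management:
    - get_user_profile
    - update_user_profile   # hypothetical second tool in the same group
```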
<h2>Getting started</h2>
<p>The integration of MongoDB Atlas and MCP Toolbox for Databases marks a significant step forward in simplifying database interactions for enterprises embracing gen AI. By enabling seamless connectivity, advanced data operations, and cross-source queries, this collaboration empowers businesses to build AI-driven applications that maximize the value of their data while enhancing efficiency and scalability.</p>
<div class="callout">
<p><b>Get started today through <a href="https://console.cloud.google.com/marketplace/product/mongodb/mdb-atlas-self-service?project=mdb-gcp-marketplace&utm_source=marketplace&utm_medium=ToolboxBlog&utm_campaign=ToolboxBlog_MDB&utm_term=mongodb" target="_blank">Google Cloud Marketplace</a>.</b></p>
<ol>
	<font size="4">
		<li>Set up <a href="https://github.com/googleapis/genai-toolbox/blob/main/docs/en/getting-started/local_quickstart_js.md" target="_blank">MCP Toolbox for Databases</a> locally. </li>
		<li>Set up <a href="https://github.com/googleapis/genai-toolbox/blob/main/docs/en/resources/sources/mongodb.md" target="_blank">MongoDB Atlas source connector</a>. </li>
		<li>And then set up <a href="https://github.com/googleapis/genai-toolbox/tree/main/docs/en/resources/tools/mongodb" target="_blank">MongoDB Atlas tools</a>.</li>
	</font>
</ol>	]]></description>
      <pubDate>Mon, 22 Sep 2025 14:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/innovation/simplify-ai-driven-data-connectivity-mcp-toolbox</link>
      <guid>https://www.mongodb.com/company/blog/innovation/simplify-ai-driven-data-connectivity-mcp-toolbox</guid>
    </item><item>
      <title>MongoDB Community Edition to Atlas: A Migration Masterclass With BharatPE</title>
      <description><![CDATA[<p>Launched in 2018, BharatPE is a fintech pioneer serving millions of Indian retailers and small businesses across more than 450 cities. The company processes over ₹12,000 crore (about US $1.368 billion) in monthly Unified Payments Interface (UPI)-based transactions.</p>
<p>One of BharatPE’s most innovative financial solutions is India’s first interoperable UPI QR code—a scannable 2D barcode that empowers users to make payments using the UPI system in India—and a zero-MDR (Merchant Discount Rate) payment acceptance service, which enables merchants to accept payments through the same system without any charges.</p>
<p>Behind BharatPE’s success is the ability to manage high volumes of data, maintain data security, and scale to accommodate growth and adapt to traffic peaks, all while keeping operational and maintenance burden low. This is all powered by <a href="https://www.mongodb.com/atlas">MongoDB Atlas</a>.</p>
<p>Sumit Malik, Head of Database Operations at BharatPE, <a href="https://www.youtube.com/watch?v=wHOopWPhHxI&list=PL4RCxklHWZ9tYeDD_5f41Q2pt694E2nLg&index=2">presented at MongoDB .local Delhi in July 2025</a>, sharing the company’s transformational journey from managing a <a href="https://www.mongodb.com/products/self-managed/community-edition">self-hosted version of MongoDB</a> to MongoDB Atlas.</p>
<iframe width="800" height="425" src="https://www.youtube.com/embed/wHOopWPhHxI?si=d0qDed3JwO1oS9vF" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
<h2>From Community Edition to Atlas: Unlocking more scale and reducing complexity</h2>
<p>BharatPE’s legacy infrastructure relied on a self-hosted version of MongoDB: <a href="https://www.mongodb.com/products/self-managed/community-edition">MongoDB Community Edition</a>. The setup included three sharded clusters, each with three nodes (one primary, two secondary), handling BharatPE’s 45 terabytes of data.</p>
<p>However, self-managing this large deployment created several challenges. Data was spread unevenly across clusters, which caused imbalances and scaling complexities. Maintaining the database also proved costly and time-consuming for the team.</p>
<p>BharatPE was also looking to expand its disaster recovery capabilities to remove business continuity and downtime risks.</p>
<p>Finally, operating in a regulated industry with high security standards meant that BharatPE needed robust end-to-end security and compliance.</p>
<p>“We needed a database platform that could scale seamlessly, secure our data, and minimize operational burden,” said Malik.</p>
<p>After careful consideration and due diligence, it was determined that <a href="https://www.mongodb.com/atlas">MongoDB Atlas</a> delivered the ideal solution against BharatPE's requirements.</p>
<h2>A carefully planned, 5-step migration approach</h2>
<p>MongoDB's professional services team helps customers migrate from the self-managed version of MongoDB to MongoDB Atlas. Our work with many customers has led us to develop a methodical five-step migration process. This approach was central to avoiding downtime and maintaining business continuity throughout BharatPE’s migration:</p>
<ol>
	<font size="4">
		<li><b>Design phase: defining scope and strategy</b> - In the initial phase, BharatPE worked with MongoDB to lay the groundwork for the migration by clearly defining its scope, timeline, resources, and dependencies. They analyzed data volume, structure, and compatibility between the source system (self-hosted MongoDB) and the target system (MongoDB Atlas). “We carefully designed a migration strategy that accounted for every possible risk and dependency within our system,” said Malik. </li> 
		<li><b>De-risk phase: assessing and mitigating risks</b> - This phase—a core and valuable part of MongoDB’s approach—focused on identifying and addressing potential risks associated with the migration. BharatPE validated application compatibility with MongoDB Atlas and assessed the suitability of its driver versions. Malik shared: “Understanding compatibility challenges early on helped us eliminate surprises during production.” </li>
		<li><b>Test phase: validating systems in lower environments</b> - Before touching the production environment, BharatPE conducted extensive testing in a development environment that closely emulated its real-world setup. “We created a fully mirrored MongoDB Atlas test environment where we integrated our existing systems and validated application sanity and compatibility,” said Malik. Introducing an additional MongoDB server allowed the team to simulate real-world scenarios and ensure readiness.</li>
		<li><b>Migration phase: data transition and security</b> - BharatPE used <a href="https://www.mongodb.com/docs/mongosync/current/reference/mongosync/">MongoDB’s mongosync tool</a> alongside the migration strategy built with the MongoDB team to migrate terabytes of data securely and efficiently. Ensuring data privacy during transit was a top priority, and the team adopted <a href="https://www.mongodb.com/products/capabilities/security/encryption">MongoDB’s robust encryption functionality</a> to protect sensitive financial information and ensure compliance.</li> 
		<li><b>Validation phase: confirming data integrity and optimizing performance</b> - Once the data was moved, BharatPE performed rigorous post-migration checks. Automated scripts were developed to validate the integrity of the migrated data, ensuring it matched the original source without discrepancies. Additionally, monitoring systems and real-time alerting were set up to catch and resolve any issues immediately.</li>
	</font>
</ol>
<p>This meticulous five-step approach, and the close partnership with MongoDB’s team, allowed BharatPE to transition to MongoDB Atlas without impacting its production environment, all while ensuring data security, operational continuity, and reliability.</p>
<h2>MongoDB Atlas boosts performance by 40%</h2>
<p>Since migrating to MongoDB Atlas, BharatPE has realized tangible benefits that have directly impacted its operations and customer experience.</p>
<p>“With MongoDB Atlas, we effectively reduced operational complexity and improved scalability,” Malik said. Atlas’s auto-scaling capabilities enabled BharatPE to effortlessly handle the volume spikes associated with 500M+ UPI transactions monthly.</p>
<p>Atlas’s reliability has improved availability and minimized downtime, critical to BharatPE’s 24/7 operations. “The system’s auto-failover ensures seamless service continuity, even during node failures,” said Malik. Notably, MongoDB’s SLA-guaranteed 99.995% uptime delivered improved consistency.</p>
<p>Performance enhancements have been equally transformative, with a 40% improvement in query response times thanks to built-in query performance analytics. Observability dashboards and real-time alerts have enabled faster issue resolution.</p>
<p>The migration also addressed BharatPE’s security concerns. BharatPE now fully meets fintech security and compliance requirements, enabled by MongoDB’s advanced security features such as data encryption, role-based access control, and VPC peering.</p>
<p>Finally, by eliminating the complexities of self-managed infrastructure, the company has freed resources to focus on business growth and customer experience.</p>
<p>“MongoDB handles audit logs with a single click—we no longer need third-party tools or manual setups,” said Malik. “The migration has future-proofed our infrastructure while reducing costs and improving reliability.”</p>
<p>MongoDB Atlas now underpins the foundations of BharatPE’s operations, and ensures merchants can continue transacting seamlessly while enabling BharatPE to expand its offerings across India’s growing fintech landscape.</p>
<div class="callout">
<p><b>Visit the <a href="https://www.mongodb.com/resources/product/platform/atlas-learning-hub">Atlas Learning Hub</a> to learn more about Atlas and start building your MongoDB skills.</b></p>
<p><b>To learn more about MongoDB Community Edition, visit the <a href="https://www.mongodb.com/products/self-managed/community-edition">product page</a>.</b></p>
</div>	]]></description>
      <pubDate>Sun, 21 Sep 2025 23:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/innovation/mongodb-community-edition-to-atlas-migration-masterclass-bharatpe</link>
      <guid>https://www.mongodb.com/company/blog/innovation/mongodb-community-edition-to-atlas-migration-masterclass-bharatpe</guid>
    </item><item>
      <title>Modernizing Core Insurance Systems: Breaking the Batch Bottleneck </title>
      <description><![CDATA[<p>Modernizing your legacy database to Java + <a href="https://www.mongodb.com/products/platform">MongoDB Atlas</a> doesn’t have to mean sacrificing batch performance. By leveraging bulk operations, intelligent prefetching, and parallel execution, we built an optimization framework that not only bridges the performance gap but, in many cases, surpasses legacy systems.</p>
<p>For jobs that had been running 25–30x slower after migration, the framework brought execution times back on par with the legacy system and, in some cases, delivered 10–15x better performance. For global insurance platforms, this improved batch performance is an added technical benefit that can support new functionality.</p>
<h2>The modernization dilemma</h2>
<p>For organizations modernizing core platforms that serve significant user workloads and revenue-generating applications, moving from a legacy RDBMS to a modern application stack with Java + MongoDB unlocks several benefits:</p>
<ul>
	<font size="4">
<li><b>Flexible document model:</b> PL/SQL code tightly couples business logic with the database, making even small changes risky and time-consuming. <a href="https://www.mongodb.com/products/platform">MongoDB Atlas</a>, with its flexible document model and application-driven logic, enables teams to evolve schemas and processes quickly, a huge advantage for industries like insurance, where regulations, products, and customer expectations change rapidly.</li>
<li><b>Scalability and resilience:</b> Legacy RDBMS platforms were never designed for today’s scale of digital engagement. MongoDB’s distributed architecture supports <a href="https://www.mongodb.com/resources/basics/horizontal-vs-vertical-scaling">horizontal scale-out</a>, ensuring that core insurance workloads can handle growing customer bases, high-volume claims, and peak-time spikes without major redesigns.</li>
<li><b>Cloud-native by design:</b> MongoDB is built to thrive in the cloud. Features like global clusters, built-in replication, and high availability reduce infrastructure complexity while enabling deployment flexibility across hybrid and multi-cloud environments.</li>
<li><b>Modern developer ecosystem:</b> Decouples database and business logic dependencies, accelerating feature delivery.</li>
<li><b>Unified operational + analytical workloads:</b> Modern insurance platforms demand more than transactional processing; they require real-time insights. MongoDB’s ability to support both operational workloads and <a href="https://www.mongodb.com/resources/basics/real-time-analytics-examples">analytics</a> on live data reduces the gap between claims processing and decision-making.</li>
	</font>
</ul>
<p>However, alongside these advantages, one of the first hurdles teams encounter is batch job performance: the jobs that run daily, weekly, or monthly, such as ETL processes.</p>
<p>PL/SQL thrives on set-based operations within the database engine. But when the same workloads are reimplemented with a separate application layer and MongoDB, they can suddenly become unpredictable, slow, and even time out. In some cases, processes that ran smoothly for years started running 25–30x slower after a like-for-like migration. The majority of the issues can be factored into the following broad categories:</p>
<ul>
	<font size="4">
		<li>High network round-trips between the application and the database.</li>
		<li>Inefficient per-record operations replacing set-based logic.</li>
		<li>Under-utilization of database bulk capabilities.</li>
		<li>Application-layer computation overhead when transforming large datasets.</li>
	</font>
</ul>
<p>For teams migrating complex ETL-like processes, this wasn’t just a technical nuisance—it became a blocker for modernization at scale.</p>
<h2>The breakthrough: A batch job optimization framework</h2>
<p>We designed an extensible, multi-purpose, and resilient batch optimization framework purpose-built for high-volume, multi-collection operations in MongoDB. The framework focuses on minimizing application-database friction while retaining the flexibility of Java services.</p>
<p>Key principles include:</p>
<ol>
	<font size="4">
		<li><b>Bulk operations at scale:</b> Leveraging MongoDB’s native <code>bulkWrite</code> (including multi-collection bulk transactions in MongoDB 8) to process thousands of operations in a single round trip.</li>
		<li><b>Intelligent prefetching:</b> Reducing repeated lookups by pre-loading and caching reference data in memory-friendly structures.</li>
		<li><b>Parallel processing:</b> Partitioning workloads across threads or event processors (e.g., Disruptor pattern) for CPU-bound and I/O-bound steps.</li>
		<li><b>Configurable batch sizes:</b> Dynamically tuning batch chunk sizes to balance memory usage, network payload size, and commit frequency.</li>
		<li><b>Pluggable transformation modules:</b> Modularized data transformation logic that can be reused across multiple processes.</li>
	</font>
</ol>
<h2>Technical architecture</h2>
<p>The framework adopts a layered and orchestrated approach to batch job processing, where each component has a distinct responsibility in the end-to-end workflow. The diagram illustrates the flow of a batch execution:</p>
<ol>
	<font size="4">
		<li><b>Trigger (user / cron job):</b> The batch process begins when a user action or a scheduled cron job triggers the Spring Boot controller.</li>
		<li><b>Spring Boot controller:</b> The controller initiates the process by fetching the relevant records from the database. Once retrieved, it splits the records into batches for parallel execution.</li>
		<li><b>Database:</b> Acts as the source of truth for input data and the destination for processed results. It supports both reads (to fetch records) and writes (to persist batch outcomes).</li>
		<li><b>Executor framework:</b> This layer is responsible for parallelizing workloads. It distributes batched records, manages concurrency, and invokes ETL tasks efficiently.</li>
		<li><b>ETL process:</b> The ETL (Extract, Transform, Load) logic is applied to each batch. Data is pre-fetched, transformed according to business rules, and then loaded back into the database.</li>
		<li><b>Completion & write-back:</b> Once ETL operations are complete, the executor framework coordinates database write operations and signals the completion of the batch.</li>
		</font>
</ol>
<center><caption><b>Figure 1.</b> The architecture for the layered approach.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-17 at 1.11.35 PM-gvslu7l99i.png" alt="Diagram showing the architecture. Starting with users, the architecture progresses through to the ETL process." title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
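<p>The flow in Figure 1 can be sketched with a thread pool standing in for the executor framework and in-memory lists standing in for the database; all names and record shapes here are hypothetical:</p>

```python
from concurrent.futures import ThreadPoolExecutor

# In-memory stand-ins for the source and destination collections.
source = [{"id": i, "status": "PENDING"} for i in range(10)]
sink = []

def etl(batch):
    """Transform step applied to each batch (extract is done by the controller)."""
    return [{**rec, "status": "PROCESSED"} for rec in batch]

def run_batch_job(batch_size=4):
    # 1) Controller fetches records and splits them into batches.
    batches = [source[i:i + batch_size] for i in range(0, len(source), batch_size)]
    # 2) Executor framework parallelizes the ETL step across batches.
    with ThreadPoolExecutor(max_workers=2) as pool:
        for processed in pool.map(etl, batches):
            # 3) Completion & write-back: persist each batch's results.
            sink.extend(processed)
    return len(sink)
```

In the real framework the write-back step is a bulk write per batch, and the controller, executor, and ETL modules are separate Spring components; the orchestration order is what this sketch preserves.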
<h2>From bottleneck to advantage</h2>
<p>The results were striking. Batch jobs that previously timed out are now completed predictably within defined SLAs, and workloads that had initially run 25–30x slower after migration were optimized to perform on par with legacy RDBMSs and in several cases even deliver 10–15x better performance. What was once a bottleneck became a competitive advantage, proving that batch processing on MongoDB can significantly outperform legacy PL/SQL when implemented with the right optimization framework.</p>
<h2>Caveats and tuning tips</h2>
<p>While the framework is adaptable, its performance depends on workload characteristics and infrastructure limits:</p>
<ul>
	<font size="4">
		<li><b>Batch size tuning:</b> Too large can cause memory pressure; too small increases round-trips.</li>
		<li><b>Transaction boundaries:</b> MongoDB transactions have limits (document size, total operations); plan batching accordingly.</li>
		<li><b>Thread pool sizing:</b> Over-parallelization can overload the database or network.</li>
		<li><b>Index strategy:</b> Even with bulk writes, poor indexing can cause slowdowns.</li>
		<li><b>Prefetch scope:</b> Balance memory usage against lookup frequency.</li>
	</font>
</ul>	
<p>In short, it’s not one-size-fits-all. Every workload is different: the data you process, the rules you apply, and the scale you run at all shape how things perform. What we’ve seen, though, is that with the right tuning, this framework can handle scale reliably and turn batch processing from a pain point into something that gives you an edge.</p>
<p>If you’re exploring how to modernize your own workloads, this approach is a solid starting point. You can pick and choose the parts that make sense for your setup, and adapt as you go.</p>
<div class="callout">
<p><b>Ready to modernize your applications? Visit the <a href="https://www.mongodb.com/solutions/use-cases/modernize">modernization page</a> to learn about the MongoDB Application Platform.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 15:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/technical/modernizing-core-insurance-systems-breaking-batch-bottleneck</link>
      <guid>https://www.mongodb.com/company/blog/technical/modernizing-core-insurance-systems-breaking-batch-bottleneck</guid>
    </item><item>
      <title>MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era</title>
      <description><![CDATA[<p>Yesterday, we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop in our global .local series. Over the past year, we’ve connected with tens of thousands of partners and customers in 20 cities worldwide. But it’s especially meaningful to be in New York—where MongoDB was founded and where we are still headquartered.</p>
<div class="callout">
<p><b>This post is also available in: <a href="https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-de" target="_blank">Deutsch</a>, <a href="https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-fr" target="_blank">Français</a>, <a href="https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-es" target="_blank">Español</a>, <a href="https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-br" target="_blank">Português</a>, <a href="https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-it" target="_blank">Italiano</a>, <a href="https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-kr" target="_blank">한국어</a>, <a href="https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-cn" target="_blank">简体中文</a>.</b></p>
</div>	
<p>During the event, we introduced new capabilities that advance MongoDB’s position as the world’s leading modern database. With MongoDB 8.2, our most feature-rich and performant release yet, we are raising the bar for what developers can achieve. We also shared more about our <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> embedding models and rerankers, which bring state-of-the-art accuracy and efficiency to building trustworthy, reliable AI applications. And with <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities">Search and Vector Search now in public preview for both MongoDB Community Edition and Enterprise Server</a>, we are putting powerful retrieval capabilities directly into customers’ environments—wherever they prefer to run.</p>
<p>I am particularly excited about the launch of the <a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">MongoDB Application Modernization Platform</a>, or AMP. Enterprises everywhere are grappling with the massive costs of legacy systems that cannot support the demands of AI. AMP is not a simple “lift-and-shift.” It is a repeatable, end-to-end platform that combines AI-powered tooling, proven techniques, and specialized talent to reinvent critical business systems while minimizing cost and risk. Early results are impressive: enterprises moving from old systems to MongoDB are doing so two to three times faster, and tasks like code rewriting are accelerating by an order of magnitude.</p>
<center><caption><b>Figure 1.</b> MongoDB.local NYC keynote.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-17 at 4.12.30 PM-000u0klcze.png" alt="Photo of the stage for the .local NYC keynote" title=" " style="width: 800px"/>
</div> 
	<figcaption class="fl-center">Watch the <a href="https://www.youtube.com/live/LE8hUTNd2Go?si=GcdvxVoX1w3DRCcg">full keynote</a> on YouTube.</figcaption>
</figure>
<h2>Becoming the world’s most popular modern database</h2>
<p>When I reflect on MongoDB’s journey, I’m struck by how far we’ve come. When I joined just over a decade ago, we had only a few thousand customers. Today, MongoDB serves nearly 60,000 organizations across every industry and vertical, including more than 70% of the Fortune 100 and cutting-edge AI-native startups.</p>
<p>Yet the reason behind our growth remains the same. Relational databases built in the 1970s were never designed for the scale and complexity of modern applications. They were rigid, hard to scale, and slow to adapt. Our founders, who had lived those limitations first-hand while building DoubleClick, set out to create something better: a database model designed for the realities of the modern world. The document model was born.</p>
<p>Based on JSON, the <a href="https://www.mongodb.com/resources/basics/databases/document-databases">document model</a> is intuitive, flexible, and powerful. It allows developers to represent complex, interdependent, and constantly changing data in a natural way. And, as we enter the era of AI, those same qualities—adaptability, scalability, and security—are more critical than ever. The database a company chooses will be one of the most strategic decisions determining the success of its AI initiatives.</p>
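<p>As a minimal illustration (an editorial sketch, not an example from the keynote), a single JSON-shaped document can hold related data that a normalized relational schema would split across several joined tables:</p>

```python
import json

# One self-contained document: a customer, its addresses, and its orders
# live together. A normalized relational design would typically spread this
# across three or more tables joined by foreign keys. Names and values here
# are purely illustrative.
customer = {
    "_id": "cust-1042",
    "name": "Acme Corp",
    "addresses": [
        {"type": "billing", "city": "New York"},
        {"type": "shipping", "city": "Boston"},
    ],
    "orders": [
        {"order_id": 1, "total": 250.0, "items": ["widget", "gear"]},
    ],
}

# Documents are JSON-shaped, so they serialize directly without an ORM layer.
print(json.dumps(customer, indent=2))
```

Because the document mirrors how the application already structures the data, reads and writes need no join logic, and the schema can evolve field by field as requirements change.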
<p>Generative AI applications have already begun delivering productivity gains, writing code, drafting documents, and answering questions. But the real transformation lies ahead with <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/ai-agents">agentic AI</a>—applications that perceive, decide, and act. These intelligent agents don’t just follow workflows; they pursue outcomes, reasoning about the best steps to achieve them. And in that loop, the database is indispensable. It provides the memory that allows agents to perceive context, the facts that allow them to decide intelligently, and the state that will enable them to act coherently.</p>
<p>This is why a company’s data is its most valuable asset. <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">Large language models</a> (LLMs) may generate responses, but it is the database that provides continuity, collaboration, and true intelligence. The future of AI is not only about reasoning—it is about context, memory, and the power of your data.</p>
<h2>The ideal database for transformative AI</h2>
<p>So what does the ideal database for agentic AI look like? It must reflect today’s complexity and tomorrow’s change. It must speak the language of AI, which is increasingly JSON. It must integrate advanced retrieval across raw data, metadata, and embeddings—not just exact matching but meaning and intent. It must bridge private data and LLMs with the highest-quality embeddings and rerankers. And it must deliver the performance, scalability, and security required to power mission-critical applications at a global scale.</p>
<p>This is precisely what MongoDB delivers. We don’t simply check the boxes on this list—we define them.</p>
<h2>We’re only just getting started</h2>
<p>That’s why I am so optimistic about our future. The energy and creativity we see at every MongoDB.local event remind me of the passion that has always fueled this company. As our customers continue to innovate, I know MongoDB is in the perfect position to help them succeed in the AI era.</p>
<p>We can’t wait to see what you build next.</p>
<div class="callout">
<p><b>To see more announcements and for the latest product updates, visit our <a href="http://www.mongodb.com/new">What’s New</a> page. And head to the <a href="https://www.mongodb.com/events/mongodb-local">MongoDB.local hub</a> to see where we’ll be next.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 12:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era</link>
      <guid>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era</guid>
    </item><item>
      <title>MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era</title>
      <description><![CDATA[<p>Yesterday, we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop in our global .local series. Over the past year, we’ve connected with tens of thousands of partners and customers in 20 cities worldwide. But it’s especially meaningful to be in New York, where MongoDB was founded and where we are still headquartered.</p>
<p>During the event, we introduced new capabilities that advance MongoDB’s position as the world’s leading modern database. With MongoDB 8.2, our most feature-rich and performant release yet, we are raising the bar for what developers can achieve. We also shared more about our <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> embedding models and rerankers, which bring state-of-the-art accuracy and efficiency to building trustworthy, reliable AI applications. And with <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities">Search and Vector Search now in public preview for both MongoDB Community Edition and Enterprise Server</a>, we are putting powerful retrieval capabilities directly into customers’ environments, wherever they prefer to run.</p>
<p>I am particularly excited about the <a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">launch of the MongoDB Application Modernization Platform</a>, or AMP. Enterprises everywhere are grappling with the massive costs of legacy systems that cannot support the demands of AI. AMP is not a simple “lift-and-shift.” It is a repeatable, end-to-end platform that combines AI-powered tooling, proven techniques, and specialized talent to reinvent critical business systems while minimizing cost and risk. Early results are impressive: enterprises moving from old systems to MongoDB are doing so two to three times faster, and tasks like code rewriting are accelerating by an order of magnitude.</p>
<center><caption><b>Figure 1.</b> MongoDB.local NYC keynote.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-17 at 4.12.30 PM-000u0klcze.png" alt="Photo of the stage for the .local NYC keynote" title=" " style="width: 800px"/>
</div> 
	<figcaption class="fl-center">Watch the full keynote on <a href="https://www.youtube.com/live/LE8hUTNd2Go?si=GcdvxVoX1w3DRCcg">YouTube</a>.</figcaption>
</figure>
<h2>Becoming the world’s most popular modern database</h2>
<p>When I reflect on MongoDB’s journey, I’m struck by how far we’ve come. When I joined just over a decade ago, we had only a few thousand customers. Today, MongoDB serves nearly 60,000 organizations across every industry and vertical, including more than 70% of the Fortune 100 and cutting-edge AI-native startups.</p>
<p>Yet the reason behind our growth remains the same. Relational databases built in the 1970s were never designed for the scale and complexity of modern applications. They were rigid, hard to scale, and slow to adapt. Our founders, who had lived those limitations first-hand while building DoubleClick, set out to create something better: a database model designed for the realities of the modern world. The document model was born.</p>
<p>Based on JSON, the <a href="https://www.mongodb.com/resources/basics/databases/document-databases">document model</a> is intuitive, flexible, and powerful. It allows developers to represent complex, interdependent, and constantly changing data in a natural way. And, as we enter the era of AI, those same qualities of adaptability, scalability, and security are more critical than ever. The database a company chooses will be one of the most strategic decisions determining the success of its AI initiatives.</p>
<p>Generative AI applications have already begun delivering productivity gains, writing code, drafting documents, and answering questions. But the real transformation lies ahead with <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/ai-agents">agentic AI</a>: applications that perceive, decide, and act. These intelligent agents don’t just follow workflows; they pursue outcomes, reasoning about the best steps to achieve them. And in that loop, the database is indispensable. It provides the memory that allows agents to perceive context, the facts that allow them to decide intelligently, and the state that enables them to act coherently.</p>
<p>This is why a company’s data is its most valuable asset. <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">Large language models</a> (LLMs) may generate responses, but it is the database that provides continuity, collaboration, and true intelligence. The future of AI is not only about reasoning; it is about context, memory, and the power of your data.</p>
<h2>The ideal database for transformative AI</h2>
<p>So what does the ideal database for agentic AI look like? It must reflect today’s complexity and tomorrow’s change. It must speak the language of AI, which is increasingly JSON. It must integrate advanced retrieval across raw data, metadata, and embeddings, capturing not just exact matches but meaning and intent. It must bridge private data and LLMs with the highest-quality embeddings and rerankers. And it must deliver the performance, scalability, and security required to power mission-critical applications at a global scale.</p>
<p>This is precisely what MongoDB delivers. We don’t simply check the boxes on this list; we define them.</p>
<h2>We’re only just getting started</h2>
<p>That’s why I am so optimistic about our future. The energy and creativity we see at every MongoDB.local event remind me of the passion that has always fueled this company. As our customers continue to innovate, I know MongoDB is in the perfect position to help them succeed in the AI era.</p>
<p>We can’t wait to see what you build next.</p>
<div class="callout">
<p><b>To see more announcements and for the latest product updates, visit our <a href="http://www.mongodb.com/new">What’s New</a> page. And head to the <a href="https://www.mongodb.com/events/mongodb-local">MongoDB.local hub</a> to see where we’ll be next.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 12:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-es</link>
      <guid>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-es</guid>
    </item><item>
      <title>MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era</title>
      <description><![CDATA[<p>Yesterday, we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop in our global .local series. Over the past year, we’ve connected with tens of thousands of partners and customers in 20 cities worldwide. But it’s especially meaningful to be in New York, where MongoDB was founded and where we are still headquartered.</p>
<p>During the event, we introduced new capabilities that advance MongoDB’s position as the world’s leading modern database. With MongoDB 8.2, our most feature-rich and performant release yet, we are raising the bar for what developers can achieve. We also shared more about our <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> embedding models and rerankers, which bring state-of-the-art accuracy and efficiency to building trustworthy, reliable AI applications. And with <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities">Search and Vector Search now in public preview for both MongoDB Community Edition and Enterprise Server</a>, we are putting powerful retrieval capabilities directly into customers’ environments, wherever they prefer to run.</p>
<p>I am particularly excited about the launch of the MongoDB Application Modernization Platform (<a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">AMP</a>). Enterprises everywhere are grappling with the massive costs of legacy systems that cannot support the demands of AI. AMP is not a simple “lift-and-shift.” It is a repeatable, end-to-end platform that combines AI-powered tooling, proven techniques, and specialized talent to reinvent critical business systems while minimizing cost and risk. Early results are impressive: enterprises moving from old systems to MongoDB are doing so two to three times faster, and tasks like code rewriting are accelerating by an order of magnitude.</p>
<center><caption>Figure 1. MongoDB.local NYC keynote.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-17 at 4.12.30 PM-000u0klcze.png" alt="Photo of the stage for the .local NYC keynote" title=" " style="width: 800px"/>
</div> 
	<figcaption class="fl-center"><a href="https://www.youtube.com/live/LE8hUTNd2Go?si=GcdvxVoX1w3DRCcg">Watch the full keynote on YouTube.</a></figcaption>
</figure>
<h2>Becoming the world’s most popular modern database</h2>
<p>When I reflect on MongoDB’s journey, I’m struck by how far we’ve come. When I joined just over a decade ago, we had only a few thousand customers. Today, MongoDB serves nearly 60,000 organizations across every industry and vertical, including more than 70% of the Fortune 100 and cutting-edge AI-native startups.</p>
<p>Yet the reason behind our growth remains the same. Relational databases built in the 1970s were never designed for the scale and complexity of modern applications. They were rigid, hard to scale, and slow to adapt. Our founders, who had lived those limitations first-hand while building DoubleClick, set out to create something better: a database model designed for the realities of the modern world. The document model was born.</p>
<p>Based on JSON, the <a href="https://www.mongodb.com/resources/basics/databases/document-databases">document model</a> is intuitive, flexible, and powerful. It allows developers to represent complex, interdependent, and constantly changing data in a natural way. And, as we enter the era of AI, those same qualities of adaptability, scalability, and security are more critical than ever. The database a company chooses will be one of the most strategic decisions determining the success of its AI initiatives.</p>
<p>Generative AI applications have already begun delivering productivity gains, writing code, drafting documents, and answering questions. But the real transformation lies ahead with <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/ai-agents">agentic AI</a>: applications that perceive, decide, and act. These intelligent agents don’t just follow workflows; they pursue outcomes, reasoning about the best steps to achieve them. And in that loop, the database is indispensable. It provides the memory that allows agents to perceive context, the facts that allow them to decide intelligently, and the state that enables them to act coherently.</p>
<p>This is why a company’s data is its most valuable asset. <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">Large language models</a> (LLMs) may generate responses, but it is the database that provides continuity, collaboration, and true intelligence. The future of AI is not only about reasoning; it is about context, memory, and the power of your data.</p>
<h2>The ideal database for transformative AI</h2>
<p>So what does the ideal database for agentic AI look like? It must reflect today’s complexity and tomorrow’s change. It must speak the language of AI, which is increasingly JSON. It must integrate advanced retrieval across raw data, metadata, and embeddings, capturing not just exact matches but meaning and intent. It must bridge private data and LLMs with the highest-quality embeddings and rerankers. And it must deliver the performance, scalability, and security required to power mission-critical applications at a global scale.</p>
<p>This is precisely what MongoDB delivers. We don’t simply check the boxes on this list; we define them.</p>
<h2>We’re only just getting started</h2>
<p>That’s why I am so optimistic about our future. The energy and creativity we see at every MongoDB.local event remind me of the passion that has always fueled this company. As our customers continue to innovate, I know MongoDB is in the perfect position to help them succeed in the AI era.</p>
<p>We can’t wait to see what you build next.</p>
<div class="callout">
<p>To see more announcements and for the latest product updates, visit our <a href="http://www.mongodb.com/new">What’s New</a> page. And head to the <a href="https://www.mongodb.com/events/mongodb-local">MongoDB.local</a> hub to see where we’ll be next.</p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 12:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-cn</link>
      <guid>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-cn</guid>
    </item><item>
      <title>MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era</title>
      <description><![CDATA[<p>Yesterday, we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop in our global .local tour. This year, we have met tens of thousands of partners and customers in 20 cities around the world. But being in New York is especially meaningful for us, as it is the city where MongoDB was founded and where we still have our headquarters.</p>
<p>During the event, we introduced new capabilities that advance MongoDB’s position as the world’s leading modern database. With MongoDB 8.2, our most feature-rich and performant release yet, we are raising the bar for what developers can achieve. We also shared more about our <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> embedding models and rerankers, which bring state-of-the-art accuracy and efficiency to building trustworthy, reliable AI applications. And with <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities">Search and Vector Search now in public preview for both MongoDB Community Edition and Enterprise Server</a>, we are putting powerful retrieval capabilities directly into customers’ environments, whatever those environments may be.</p>
<p>I am particularly excited about the launch of the <a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">MongoDB Application Modernization Platform</a>, or AMP. Enterprises everywhere are grappling with the massive costs of legacy systems that cannot support the demands of AI. AMP is not a simple “lift-and-shift.” It is a repeatable, end-to-end platform that combines AI-powered tooling, proven techniques, and specialized talent to reinvent critical business systems while minimizing cost and risk. Early results are impressive: enterprises migrating from old systems to MongoDB are doing so two to three times faster, and tasks like code rewriting are speeding up significantly.</p>
<center><caption><b>Figure 1.</b> MongoDB.local NYC keynote.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-17 at 4.12.30 PM-000u0klcze.png" alt="Photo of the stage for the .local NYC keynote" title=" " style="width: 800px"/>
</div> 
	<figcaption class="fl-center">Watch the full <a href="https://www.youtube.com/live/LE8hUTNd2Go?si=GcdvxVoX1w3DRCcg">keynote</a> on YouTube.</figcaption>
</figure>
<h2>Becoming the world’s most popular modern database</h2>
<p>When I reflect on MongoDB’s journey, I’m struck by how far we’ve come. When I joined just over a decade ago, we had only a few thousand customers. Today, MongoDB is used by nearly 60,000 organizations across every industry, including more than 70% of the Fortune 100 and cutting-edge AI-native startups.</p>
<p>Yet the reason behind our growth remains the same. Relational databases built in the 1970s were never designed for the scale and complexity of modern applications. They were rigid, hard to scale, and slow to adapt. Our founders, who had lived those limitations first-hand while building DoubleClick, set out to create something better: a database model designed for the realities of the modern world. The document model was born.</p>
<p>Based on JSON, the <a href="https://www.mongodb.com/resources/basics/databases/document-databases">document model</a> is intuitive, flexible, and powerful. It allows developers to represent complex, interdependent, and constantly changing data in a natural way. And, as we enter the era of AI, those same qualities of adaptability, scalability, and security are more critical than ever. The database a company chooses will be one of the most strategic decisions determining the success of its AI initiatives.</p>
<p>Generative AI applications have already begun delivering productivity gains, writing code, drafting documents, and answering questions. But the real transformation lies ahead with <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/ai-agents">agentic AI</a>: applications that perceive, decide, and act. These intelligent agents don’t just follow workflows; they pursue outcomes, reasoning about the best steps to achieve them. And in that loop, the database is indispensable. It provides the memory that allows agents to perceive context, the facts that allow them to decide intelligently, and the state that enables them to act coherently.</p>
<p>This is why a company’s data is its most valuable asset. <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">Large language models</a> (LLMs) may generate responses, but it is the database that provides continuity, collaboration, and true intelligence. The future of AI is not only about reasoning; it is also about context, memory, and the power of your data.</p>
<h2>The ideal database for transformative AI</h2>
<p>So what does the ideal database for agentic AI look like? It must reflect today’s complexity and tomorrow’s change. It must speak the language of AI, which is increasingly JSON. It must integrate advanced retrieval across raw data, metadata, and embeddings, capturing not just exact matches but meaning and intent. It must bridge private data and LLMs with the highest-quality embeddings and rerankers. And it must deliver the performance, scalability, and security required to power mission-critical applications at a global scale.</p>
<p>This is precisely what MongoDB delivers. We don’t simply check the boxes on this list; we define them.</p>
<h2>And this is only the beginning</h2>
<p>That’s why I am so optimistic about our future. The energy and creativity we see at every MongoDB.local event remind me of the passion that has always fueled this company. As our customers continue to innovate, I know MongoDB is ideally placed to help them succeed in the AI era.</p>
<p>We can’t wait to see what you build next!</p>
<div class="callout">
<p><b>To see more announcements and for the latest product updates, visit our <a href="http://www.mongodb.com/new">What’s New</a> page. And head to the <a href="https://www.mongodb.com/events/mongodb-local">MongoDB.local</a> hub to see where we’ll be next.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 12:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-fr</link>
      <guid>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-fr</guid>
    </item><item>
      <title>MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era</title>
      <description><![CDATA[<p>Yesterday, we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop in our global .local series. Over the past year, we’ve connected with tens of thousands of partners and customers in 20 cities around the world. But it’s especially meaningful to be in New York, where MongoDB was founded and where we still have our headquarters.</p>
<p>During the event, we introduced new capabilities that reinforce MongoDB’s position as the world’s leading modern database. With MongoDB 8.2, our most feature-rich and performant release yet, we are raising the bar for what developers can achieve. We also shared more about our <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> embedding models and rerankers, which bring state-of-the-art accuracy and efficiency to building trustworthy, reliable AI applications. And with <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities">Search and Vector Search now in public preview for both MongoDB Community Edition and Enterprise Server</a>, we are putting powerful retrieval capabilities directly into customers’ environments, wherever they prefer to run.</p>
<p>I am particularly excited about the launch of the <a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">MongoDB Application Modernization Platform</a>, or AMP. Enterprises around the world are grappling with the massive costs of legacy systems that cannot support the demands of AI. AMP is not a simple &quot;lift-and-shift.&quot; It is a repeatable, end-to-end platform that combines AI-powered tooling, proven techniques, and specialized talent to reinvent critical business systems while minimizing cost and risk. Early results are impressive: enterprises moving from old systems to MongoDB are doing so two to three times faster, and tasks like code rewriting are accelerating by an order of magnitude.</p>
<center><caption><b>Figure 1.</b> MongoDB.local NYC keynote.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-17 at 4.12.30 PM-000u0klcze.png" alt="Photo of the stage for the .local NYC keynote" title=" " style="width: 800px"/>
</div> 
	<figcaption class="fl-center">Watch the full keynote on <a href="https://www.youtube.com/live/LE8hUTNd2Go?si=GcdvxVoX1w3DRCcg">YouTube</a>.</figcaption>
</figure>
<h2>Becoming the world’s most popular modern database</h2>
<p>When I reflect on MongoDB’s journey, I’m struck by how far we’ve come. When I joined just over a decade ago, we had only a few thousand customers. Today, MongoDB serves nearly 60,000 organizations across every industry and vertical, including more than 70% of the Fortune 100 and cutting-edge AI-native startups.</p>
<p>Yet the reason behind our growth remains the same. Relational databases built in the 1970s were never designed for the scale and complexity of modern applications. They were rigid, hard to scale, and slow to adapt. Our founders, who had lived those limitations first-hand while building DoubleClick, set out to create something better: a database model designed for the realities of the modern world. The document model was born.</p>
<p>Based on JSON, the <a href="https://www.mongodb.com/resources/basics/databases/document-databases">document model</a> is intuitive, flexible, and powerful. It allows developers to represent complex, interdependent, and constantly changing data in a natural way. And, as we enter the era of AI, those same qualities of adaptability, scalability, and security are more critical than ever. The database a company chooses will be one of the most strategic decisions determining the success of its AI initiatives.</p>
<p>Generative AI applications have already begun delivering productivity gains, writing code, drafting documents, and answering questions. But the real transformation lies ahead with <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/ai-agents">agentic AI</a>: applications that perceive, decide, and act. These intelligent agents don’t just follow workflows; they pursue outcomes, reasoning about the best steps to achieve them. And in that loop, the database is indispensable. It provides the memory that allows agents to perceive context, the facts that allow them to decide intelligently, and the state that enables them to act coherently.</p>
<p>This is why a company’s data is its most valuable asset. <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">Large language models</a> (LLMs) may generate responses, but it is the database that provides continuity, collaboration, and true intelligence. The future of AI is not only about reasoning; it is about context, memory, and the power of your data.</p>
<h2>The ideal database for transformative AI</h2>
<p>So what does the ideal database for agentic AI look like? It must reflect today’s complexity and tomorrow’s change. It must speak the language of AI, which is increasingly JSON. It must integrate advanced retrieval across raw data, metadata, and embeddings, capturing not just exact matches but meaning and intent. It must bridge private data and LLMs with the highest-quality embeddings and rerankers. And it must deliver the performance, scalability, and security required to power mission-critical applications at a global scale.</p>
<p>This is precisely what MongoDB delivers. We don’t simply check the boxes on this list; we define them.</p>
<h2>We’re only just getting started</h2>
<p>That’s why I am so optimistic about our future. The energy and creativity we see at every MongoDB.local event remind me of the passion that has always fueled this company. As our customers continue to innovate, I know MongoDB is in the ideal position to help them thrive in the AI era.</p>
<p>We can’t wait to see what you build next.</p>
<div class="callout">
<p><b>To see more announcements and for the latest product updates, visit our <a href="http://www.mongodb.com/new">What’s New</a> page. And head to the <a href="https://www.mongodb.com/events/mongodb-local">MongoDB.local hub</a> to see where we’ll be next.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 12:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-br</link>
      <guid>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-br</guid>
    </item><item>
      <title>MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era</title>
      <description><![CDATA[<p>Yesterday, we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop in our global .local series. Over the past year, we have connected with tens of thousands of partners and customers across 20 cities worldwide. But it is especially meaningful to be in New York, where MongoDB was founded and where we are still headquartered.</p>
<p>At the event, we introduced new capabilities that further strengthen MongoDB's position as the world's leading modern database. MongoDB 8.2, our most feature-rich and performant release yet, raises the bar for what developers can achieve. We also shared more about our <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> embedding models and rerankers, which bring state-of-the-art accuracy and efficiency to building trustworthy, reliable AI applications. And with <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities">Search and Vector Search now available in public preview for both MongoDB Community Edition and Enterprise Server</a>, customers can bring powerful retrieval capabilities directly into whatever environment they prefer to run.</p>
<p>I am especially excited about the launch of the MongoDB Application Modernization Platform (<a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">AMP</a>). Every enterprise struggles with the enormous cost of legacy systems that cannot support the demands of AI. AMP is not a simple &quot;lift and shift.&quot; It is a repeatable, end-to-end platform that combines AI-powered tools, proven techniques, and specialized talent to reinvent critical business systems while minimizing cost and risk. Early results are impressive: enterprises moving from legacy systems to MongoDB are doing so two to three times faster, and tasks like code rewriting are accelerating dramatically.</p>
<center><caption>Figure 1. MongoDB.local NYC keynote.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-17 at 4.12.30 PM-000u0klcze.png" alt="Photo of the stage for the .local NYC keynote" title=" " style="width: 800px"/>
</div> 
	<figcaption class="fl-center">Watch the full keynote on <a href="https://www.youtube.com/live/LE8hUTNd2Go?si=GcdvxVoX1w3DRCcg">YouTube</a>.</figcaption>
</figure>
<h2>Becoming the world's most popular modern database</h2>
<p>When I reflect on MongoDB's journey, I am struck by how far we have come. When I joined a little over a decade ago, we had only a few thousand customers. Today, MongoDB serves nearly 60,000 organizations across every industry and sector, including more than 70% of the Fortune 100 and cutting-edge AI-native startups.</p>
<p>Yet the reason behind our growth remains the same. Relational databases, built in the 1970s, were never designed for the scale and complexity of modern applications. They were rigid, hard to scale, and slow to adapt. Our founders, who experienced these limitations firsthand while building DoubleClick, set out to create something better: a database model designed for the realities of the modern world. The document model was born.</p>
<p>Based on JSON, the document model is intuitive, flexible, and powerful. It lets developers naturally represent complex, interdependent, ever-changing data. And as we enter the AI era, those same qualities of adaptability, scalability, and security matter more than ever. The database an enterprise chooses will be one of the most strategic decisions determining the success of its AI initiatives.</p>
<p>Generative AI applications have already begun to deliver productivity gains, writing code, drafting documents, and answering questions. But the real transformation lies ahead with <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/ai-agents">agentic AI</a>: applications that perceive, decide, and act. These intelligent agents don't just follow workflows; they pursue outcomes, reasoning about the best steps to achieve them. And in that loop, the database is indispensable. It provides the memory that lets agents perceive context, the facts that let them decide intelligently, and the state that lets them act coherently.</p>
<p>That is why a company's data is its most valuable asset. <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">Large language models</a> (LLMs) can generate responses, but it is the database that provides continuity, collaboration, and true intelligence. The future of AI is not just about reasoning; it is about context, memory, and the power of your data.</p>
<h2>The ideal database for transformative AI</h2>
<p>So what does the ideal database for agentic AI look like? It must reflect the complexity of today and the change of tomorrow. It must speak the language of AI, which is increasingly JSON. It must integrate advanced retrieval across raw data, metadata, and embeddings: not just exact matching, but meaning and intent. It must connect private data and LLMs with the highest-quality embeddings and rerankers. And it must deliver the performance, scalability, and security required to power mission-critical applications at global scale.</p>
<p>That is precisely what MongoDB delivers. We don't just check the boxes on this list; we define them.</p>
<h2>This is just the beginning</h2>
<p>That is why I am so optimistic about our future. The energy and creativity we see at every MongoDB.local event remind me of the passion that has always driven this company. As our customers continue to innovate, I know MongoDB is ideally positioned to help them thrive in the AI era.</p>
<p>We can't wait to see what you build next.</p>
<div class="callout">
<p>To see more announcements and get the latest product updates, visit our <a href="http://www.mongodb.com/new">What's New</a> page. And head to the <a href="https://www.mongodb.com/events/mongodb-local">MongoDB.local</a> hub to see where we'll be next.</p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 12:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-kr</link>
      <guid>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-kr</guid>
    </item><item>
      <title>MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era</title>
      <description><![CDATA[<p>Yesterday, we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop in our global .local series. Over the past year, we have connected with tens of thousands of partners and customers across 20 cities worldwide. But it is especially meaningful to be in New York, where MongoDB was founded and where we are still headquartered.</p>
<p>During the event, we introduced new capabilities that further strengthen MongoDB's position as the world's leading modern database. With MongoDB 8.2, our most feature-rich and performant release yet, we are raising the bar for what developers can achieve. We also shared more about our <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> embedding models and rerankers, which deliver state-of-the-art accuracy and efficiency for building trustworthy, reliable AI applications. And with <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities">Search and Vector Search now available in public preview for both MongoDB Community Edition and Enterprise Server</a>, we are bringing powerful retrieval capabilities directly into customer environments, wherever they prefer to run.</p>
<p>I am especially excited about the <a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">launch of the MongoDB Application Modernization Platform</a> (AMP). Enterprises around the world struggle with the enormous cost of legacy systems that cannot support the demands of AI. AMP is not a simple &quot;lift and shift.&quot; It is a repeatable, end-to-end platform that combines AI-powered tools, proven techniques, and specialized talent to reinvent critical business systems while minimizing cost and risk. Early results are impressive: enterprises moving from legacy systems to MongoDB are doing so two to three times faster, and tasks like code rewriting are accelerating many times over.</p>
<center><caption><b>Figure 1:</b> MongoDB.local NYC keynote.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-17 at 4.12.30 PM-000u0klcze.png" alt="Photo of the stage for the .local NYC keynote" title=" " style="width: 800px"/>
</div> 
	<figcaption class="fl-center">Watch the full&nbsp;<a href="https://www.youtube.com/live/LE8hUTNd2Go?si=GcdvxVoX1w3DRCcg">keynote</a>&nbsp;on YouTube.</figcaption>
</figure>
<h2>Becoming the world's most popular modern database</h2>
<p>When I reflect on MongoDB's journey, I am struck by how far we have come. When I joined a little over a decade ago, we had only a few thousand customers. Today, MongoDB serves nearly 60,000 organizations across every industry and sector, including more than 70% of the Fortune 100 and cutting-edge AI-native startups.</p>
<p>Yet the reason behind our growth remains the same. Relational databases, built in the 1970s, were never designed for the scale and complexity of modern applications. They were rigid, hard to scale, and slow to adapt. Our founders, who experienced these limitations firsthand while building DoubleClick, set out to create something better: a database model designed for the realities of the modern world. The document model was born.</p>
<p>Based on JSON, the <a href="https://www.mongodb.com/resources/basics/databases/document-databases">document model</a> is intuitive, flexible, and powerful. It lets developers naturally represent complex, interdependent, ever-changing data. And as we enter the AI era, those same qualities of adaptability, scalability, and security matter more than ever. The database an enterprise chooses is one of the most strategic decisions determining the success of its AI initiatives.</p>
<p>Generative AI applications have already delivered productivity gains, writing code, drafting documents, and answering questions. But the real transformation lies ahead with <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/ai-agents">agentic AI</a>: applications that perceive, decide, and act. These intelligent agents don't just follow workflows; they pursue outcomes, reasoning about the best steps to achieve them. And in that loop, the database is indispensable. It provides the memory that lets agents perceive context, the facts that let them decide intelligently, and the state that lets them act coherently.</p>
<p>That is why a company's data is its most valuable asset. <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">Large language models</a> (LLMs) can generate answers, but it is the database that provides continuity, collaboration, and true intelligence. The future of AI is not just about reasoning; it is about context, memory, and the power of your data.</p>
<h2>The ideal database for transformative AI</h2>
<p>So what does the ideal database for agentic AI look like? It must reflect the complexity of today and the change of tomorrow. It must speak the language of AI, which is increasingly JSON. It must integrate advanced retrieval across raw data, metadata, and embeddings: not just exact matching, but meaning and intent. It must connect private data and large language models with the highest-quality embeddings and rerankers. And it must deliver the performance, scalability, and security required to run mission-critical applications at global scale.</p>
<p>That is exactly what MongoDB delivers. We don't just check the boxes on this list; we define them.</p>
<h2>We are just getting started</h2>
<p>That is why I am so optimistic about our future. The energy and creativity we see at every MongoDB.local event remind me of the passion that has always driven this company. As our customers continue to innovate, I know MongoDB is perfectly positioned to help them succeed in the AI era.</p>
<p>We can't wait to see what you build next.</p>
<div class="callout">
<p><b>For more announcements and the latest product updates, visit our <a href="http://www.mongodb.com/new">What's New</a> page. And head to the <a href="https://www.mongodb.com/events/mongodb-local">MongoDB.local</a> hub to see where we'll stop next.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 12:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-de</link>
      <guid>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-de</guid>
    </item><item>
      <title>MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era</title>
      <description><![CDATA[<p>Yesterday, we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop on our global .local series. Over the past year, we have connected with tens of thousands of partners and customers across 20 cities worldwide. But it is especially meaningful to be in New York, where MongoDB was founded and where we are still headquartered.</p>
<p>During the event, we introduced new capabilities that strengthen MongoDB's position as the world's leading modern database. With MongoDB 8.2, our most feature-rich and performant release yet, we are raising the bar for what developers can achieve. We also shared more about our <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> embedding models and rerankers, which deliver state-of-the-art accuracy and efficiency for building trustworthy, reliable AI applications. And with <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities">Search and Vector Search now available in public preview for both MongoDB Community Edition and Enterprise Server</a>, we are bringing powerful retrieval capabilities directly into customer environments, wherever they prefer to operate.</p>
<p>I am especially excited about the launch of the <a href="https://www.mongodb.com/company/blog/product-release-announcements/amp-ai-driven-approach-modernization">MongoDB Application Modernization Platform</a>, or AMP. Enterprises around the world are grappling with the enormous cost of legacy systems that cannot support the demands of AI. AMP is not a simple &quot;lift and shift.&quot; It is a repeatable, end-to-end platform that combines AI-powered tools, proven techniques, and specialized talent to reinvent critical business systems while minimizing cost and risk. Early results are impressive: enterprises moving from legacy systems to MongoDB are doing so two to three times faster, and tasks like code rewriting are accelerating by an order of magnitude.</p>
<center><caption><b>Figure 1.</b> MongoDB.local NYC keynote.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-17 at 4.12.30 PM-000u0klcze.png" alt="Photo of the stage for the .local NYC keynote" title=" " style="width: 800px"/>
</div> 
	<figcaption class="fl-center">Watch the full keynote on&nbsp;<a href="https://www.youtube.com/live/LE8hUTNd2Go?si=GcdvxVoX1w3DRCcg">YouTube</a>.</figcaption>
</figure>
<h2>Becoming the world's most popular modern database</h2>
<p>When I reflect on MongoDB's journey, I am struck by how far we have come. When I joined a little over a decade ago, we had only a few thousand customers. Today, MongoDB serves nearly 60,000 organizations across every industry and sector, including more than 70% of the Fortune 100 and cutting-edge AI-native startups.</p>
<p>Yet the reason behind our growth remains unchanged. Relational databases built in the 1970s were not designed for the scale and complexity of modern applications. They were rigid, hard to scale, and slow to adapt. Our founders, who experienced these limitations firsthand while building DoubleClick, set out to create something better: a database model designed for the realities of the modern world. Thus the document model was born.</p>
<p>Based on JSON, the <a href="https://www.mongodb.com/resources/basics/databases/document-databases">document model</a> is intuitive, flexible, and powerful. It lets developers naturally represent complex, interdependent, ever-evolving data. And as we enter the AI era, those same qualities of adaptability, scalability, and security matter more than ever. The database an enterprise chooses will be one of the most strategic decisions determining the success of its AI initiatives.</p>
<p>Generative AI applications have already begun to improve productivity, write code, draft documents, and answer questions. But the real transformation lies in <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/ai-agents">agentic AI</a>: applications that perceive, decide, and act. These intelligent agents don't just follow workflows; they pursue outcomes, reasoning about the best steps to achieve them. And in that loop, the database is indispensable. It provides the memory that lets agents perceive context, the facts that let them decide intelligently, and the state that lets them act coherently.</p>
<p>That is why a company's data is its most valuable asset. <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">Large language models</a> (LLMs) can generate responses, but it is the database that provides continuity, collaboration, and true intelligence. The future of AI is not just about reasoning; it is also about context, memory, and the power of your data.</p>
<h2>The ideal database for transformative AI</h2>
<p>So what is the ideal database for agentic AI? It must reflect the complexity of today and the change of tomorrow. It must speak the language of AI, which is increasingly JSON. It must integrate advanced retrieval across raw data, metadata, and embeddings: not just exact matching, but meaning and intent. It must connect private data and LLMs with the highest-quality embeddings and rerankers. And it must deliver the performance, scalability, and security required to power mission-critical applications at global scale.</p>
<p>That is exactly what MongoDB offers. We don't just check the boxes on this list; we define them.</p>
<h2>We have only just begun</h2>
<p>That is why I am so optimistic about our future. The energy and creativity we see at every MongoDB.local event remind me of the passion that has always fueled this company. As our customers continue to innovate, I know MongoDB is perfectly positioned to help them succeed in the AI era.</p>
<p>We can't wait to see what you build next.</p>
<div class="callout">
<p><b>To see more announcements and get the latest product updates, visit our <a href="http://www.mongodb.com/new">What's New</a> page. Head to the <a href="https://www.mongodb.com/events/mongodb-local">MongoDB.local hub</a> to see where we'll be next.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 12:00:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-it</link>
      <guid>https://www.mongodb.com/company/blog/events/local-nyc-2025-defining-ideal-database-for-ai-era-it</guid>
    </item><item>
      <title>Celebrating Excellence: MongoDB Global Partner Awards 2025</title>
      <description><![CDATA[<p>In a world being reshaped by AI and rapid technological change, one thing is clear: our partners are powering the future with MongoDB. Together, we help customers modernize legacy systems, solve challenges from security to budget constraints, and build the next wave of AI-powered applications.</p>
<p>That’s why we’re proud to announce the annual MongoDB Global Partner Awards — celebrating partners who led the way in 2025. From pioneering AI and modernization to advancing public sector innovation to building bold go-to-market collaborations, these partners set the standard for excellence. Their leadership doesn’t just move the needle — it redefines what’s possible.</p>
<div class="callout">
<p><b>This post is also available in: <a href="https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-de" target="_blank">Deutsch</a>, <a href="https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-fr" target="_blank">Français</a>, <a href="https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-es" target="_blank">Español</a>, <a href="https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-br" target="_blank">Português</a>, <a href="https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-it" target="_blank">Italiano</a>, <a href="https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-kr" target="_blank">한국어</a>, <a href="https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-cn" target="_blank">简体中文</a>.</b></p>
</div>	
<h2>Global Cloud Partner of the Year: Microsoft</h2>
<p>We are proud to recognize <a href="https://cloud.mongodb.com/ecosystem/microsoft-power-platform-mongodb-connector">Microsoft</a> for exceptional year-over-year growth as MongoDB’s Global Cloud Partner of the Year. Together, MongoDB and Microsoft have delivered strong momentum across industries such as healthcare, telecommunications, and financial services, helping organizations build great applications that deliver exceptional customer experiences.</p>
<p>Microsoft’s deep commitment to collaboration, customer success, and cloud leadership makes it an indispensable part of MongoDB’s partner ecosystem. The strength of the partnership continues to grow; in fact, MongoDB was recently selected as a Microsoft partner for the “Unify your data” solution play, which enables customers to benefit from the joint integrations and go-to-market (GTM) resources between <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> on Azure and native Microsoft services.</p>
<h2>Global AI Cloud Partner of the Year: Amazon Web Services (AWS)</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/amazon-web-services">AWS</a> has been a driving force in helping customers unlock the full potential of AI with MongoDB, highlighted by our work with <a href="https://www.mongodb.com/solutions/customer-case-studies/novo-nordisk">Novo Nordisk</a>, who leveraged Amazon Bedrock and MongoDB Atlas to build an AI solution that cut one of their most time-intensive workflows from 12 weeks to 10 minutes. The work with Novo Nordisk is just one example of many that showcases the power of our partnership to create business differentiation for customers in the gen AI era.</p>
<p>MongoDB was also a generative AI Competency launch partner for AWS, further tightening our collaboration in AI. From breakthrough generative AI use cases and beyond, our partnership empowers organizations to move faster, innovate more boldly, and transform with confidence. Together, AWS and MongoDB are shaping what’s possible in the AI era.</p>
<h2>Global Cloud GTM Partner of the Year: Google Cloud</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/google-cloud">Google Cloud</a> is being honored for accelerating new business through impactful joint GTM initiatives. MongoDB's partnership with Google Cloud has set the standard for meaningful collaboration—driving new business and delivering impact across some of the world’s most complex global enterprises. The joint Google Cloud and MongoDB Sales Development Representative program has been the cornerstone of this success, ensuring early-stage talent gets the opportunity to work with the largest organizations in the world while learning a sales playbook that will serve them well for the rest of their careers. Google Cloud continues to be a driving force in MongoDB’s global growth thanks to its joint commitment to innovative GTM strategies.</p>
<h2>Global Systems Integrator Partner of the Year: Accenture</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-mainframe-modernization">Accenture</a> has demonstrated exceptional commitment as a Global SI Partner, establishing a dedicated center of excellence for MongoDB within its software engineering service line.</p>
<p>Together, MongoDB and Accenture have delivered transformative customer outcomes across industries, from payment modernization for a leading bank to data transformation for a major manufacturer. Meanwhile, closer collaboration with Accenture’s BFSI business unit has continued to fuel global customer success. By combining MongoDB’s modern database platform with Accenture’s deep industry expertise, our partnership continues to help customers modernize, unlock data-driven insights, and accelerate digital transformation at enterprise scale.</p>
<h2>Global Public Sector Partner of the Year: Accenture Federal Services</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-federal-services">Accenture Federal Services</a> has played a pivotal role in advancing MongoDB’s presence in the public sector. Thanks to its scale, expertise, and focus on customer outcomes, it has driven remarkable year-over-year growth and has supported critical government missions in coordination with MongoDB.</p>
<p>MongoDB and Accenture Federal Services are helping government agencies meet their efficiency goals by modernizing legacy applications, seamlessly consolidating platforms, and streamlining architectures, all while reducing costs. We are excited to have Accenture Federal Services as a key sponsor of our inaugural MongoDB Public Sector Summit in January 2026.</p>
<h2>Global Tech Partner of the Year: Confluent</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/confluent">Confluent</a>—the data streaming platform built by the co-creators of Apache Kafka®—continues to be a strategic partner with more than 550 joint customer deployments delivering impact across industries worldwide. Over the past year, MongoDB and Confluent have strengthened global go-to-market (GTM) alignment, focusing on accelerating co-sell engagement across EMEA and APAC.</p>
<p>Together, MongoDB and Confluent have delivered gen AI quickstarts, no-code streaming demos, and co-authored agentic AI thought leadership to help customers accelerate innovation with data in motion and build event-driven AI applications. Our partnership is anchored in strong field collaboration, with ongoing co-sponsored AI workshops and hands-on developer events. A standout highlight of our GTM collaboration was a joint gen AI Developer Day with Confluent and LangChain, where AI leaders engaged 80+ developers to showcase how our combined platforms enable cost-effective, explainable, and personalized multi-agent systems.</p>
<h2>Global ISV Partner of the Year: BigID</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/bigid-data-intelligence-platform">BigID</a> has remained a standout ISV partner for MongoDB, consistently delivering strong results for customers across financial services, insurance, and healthcare. Together, we have launched impactful joint GTM initiatives, from customer events to tailored incentive programs that have accelerated growth opportunities. BigID continues to be recognized as a leader in data security, privacy, and AI data management, and thanks to our close global alignment, is further strengthening MongoDB’s position as a trusted partner for organizations operating in highly regulated industries.</p>
<h2>Global AI Tech Partner of the Year: LangChain</h2>
<p>MongoDB’s partnership with <a href="https://cloud.mongodb.com/ecosystem/langchain">LangChain</a> has unlocked powerful new integrations that make it easier for developers to build <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/retrieval-augmented-generation">retrieval-augmented generation</a> (RAG) applications and intelligent agents on MongoDB.</p>
<p>From hybrid search and parent document retrievers to short- and long-term memory capabilities, these joint solutions are helping developers push the boundaries of what’s possible with AI. Through joint workshops, webinars, and hands-on training, we have equipped developers with the tools and knowledge to adopt these capabilities at scale. Momentum continues to build rapidly, and adoption of both the LangChain/MongoDB and LangGraph/MongoDB packages continues to grow, highlighting the strength of our collaboration and the thriving developer ecosystem that MongoDB and LangChain are enabling together.</p>
<h2>Global AI SI Partner of the Year: Pureinsights</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/pureinsights">Pureinsights</a> accelerates intelligent search and AI application development with its powerful Discovery Platform. A standout capability is its integration with <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI by MongoDB</a>, delivering advanced embeddings, multimodal embeddings, and result reranking, and earning recognition for its strong track record of proof points and differentiated value in enterprise-grade use cases. With a focus on implementing generative AI, vector search, and RAG use cases, Pureinsights continues to empower clients to innovate quickly, reliably, and at scale.</p>
<h2>Global Modernization Partner of the Year: gravity9</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/gravity9">gravity9</a> has established itself as a trusted MongoDB partner by delivering consistent impact through modernization and jumpstart projects across industries and geographies, powered by AI. As a strategic implementation partner, gravity9 specializes in designing and delivering cloud-native, scalable solutions that help organizations modernize legacy systems, adopt new technologies, accelerate time-to-value, and prepare for the AI era. By combining deep technical expertise with an agile delivery model, gravity9 enables customers to unlock transformation opportunities, whether moving workloads to the cloud, building new AI experiences, or optimizing existing infrastructure. gravity9’s close collaboration with MongoDB’s Professional Services teams has generated consistently high customer ratings, demonstrating the quality and reliability of their work.</p>
<h2>Global Impact Partner of the Year: IBM</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/mongodb-on-ibm-z-and-linuxone">IBM</a> is being recognized with the Impact Partner of the Year award for its strategic contributions across a variety of large, industry-leading clients. IBM has played a critical role in securing large contracts with several multinational financial institutions and is investing further in expanding the partnership globally. The partnership continues to grow, including with Atlas &amp; Watsonx.ai and an increasing number of differentiated projects on IBM Z or LinuxONE infrastructure. IBM is a trusted vendor for large enterprises and a strategic partner in over 25% of MongoDB's largest customers.</p>
<h2>Global Cloud - Certified DBaaS Partner of the Year: Alibaba</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/alibaba-cloud">Alibaba Cloud</a> has established itself as a strategic MongoDB partner by driving innovation with ApsaraDB for MongoDB and utilizing AI to help organizations build modern applications. With a strong focus on key verticals such as Gaming, Automotive, Retail, and Fintech, Alibaba Cloud is enabling enterprises to modernize faster and unlock new opportunities across industries. By combining cutting-edge data solutions with a bold global expansion strategy, Alibaba Cloud empowers customers worldwide to accelerate transformation, whether scaling digital platforms, delivering new customer experiences, or optimizing mission-critical workloads.</p>
<h2>Looking ahead</h2>
<p>Congratulations to all of the 2025 Global Partner Award winners! Their commitment to innovation, collaboration, and customer success has—and will have—a lasting impact on organizations worldwide. These awards not only recognize the past year’s achievements, but also underscore MongoDB’s vision for what we, alongside our partners, will build together in the future.</p>
<div class="callout">
<p><b>To learn more about the MongoDB Partner Program, please visit our <a href="https://www.mongodb.com/partners?tck=partner_awards_blog_2025">partners page</a>.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 00:59:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025</link>
      <guid>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025</guid>
    </item><item>
      <title>庆祝卓越：MongoDB 全球合作伙伴奖 2025</title>
      <description><![CDATA[<p>在这个被人工智能和迅猛技术变革重塑的世界里，有一件事愈发清晰明确：我们的合作伙伴正借助 MongoDB 驱动未来前行。我们携手助力客户实现遗留系统现代化，应对从安全到预算限制的种种挑战，并共同构建新一代由人工智能驱动的应用程序。</p>
<p>这就是为什么我们自豪地宣布举办年度 MongoDB 全球合作伙伴大奖，表彰在 2025 年引领潮流的合作伙伴。从引领人工智能与现代化浪潮，到推动公共部门创新，再到构建突破性的市场进入合作模式，这些合作伙伴树立了卓越典范的标杆。他们的领导不仅推动了进展，还重新定义了可能性。</p>
<h2>年度全球云合作伙伴：Microsoft</h2>
<p>我们很荣幸将年度全球云合作伙伴奖授予 <a href="https://cloud.mongodb.com/ecosystem/microsoft-power-platform-mongodb-connector">Microsoft</a>，以表彰其作为 MongoDB 合作伙伴实现的卓越同比增长。MongoDB 与 Microsoft 强强联合，已在医疗保健、电信及金融服务等行业形成强劲发展势头，共同助力企业构建出能够提供卓越客户体验的优质应用程序。</p>
<p>Microsoft 在协同合作、客户成功与云技术领导力方面的不懈投入，使其成为 MongoDB 合作伙伴生态中不可或缺的重要力量。双方合作伙伴关系持续深化：MongoDB 近期更获选参与 Microsoft“统一数据解决方案”计划，成为其合作伙伴。该合作使客户能够充分利用 <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> 在 Azure 平台与 Microsoft 原生服务之间的深度集成优势及联合市场推广资源。</p>
<h2>年度全球 AI 云合作伙伴：Amazon Web Services (AWS)</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/amazon-web-services">AWS</a> 一直是推动客户充分发挥 MongoDB 人工智能潜力的核心力量，其中的典型案例是我们与 Novo Nordisk 的合作：他们利用 Amazon Bedrock 和 MongoDB Atlas 构建了一套 AI 解决方案，将其中一项最耗时的工作流程从 12 周缩短到 10 分钟。与 Novo Nordisk 的合作仅是众多案例之一，它印证了双方合作伙伴关系在生成式 AI 时代为客户创造业务领先优势的强大能力。</p>
<p>MongoDB 也是 AWS 的生成式人工智能能力启动伙伴，进一步加强了我们在 AI 领域的合作。从突破性的生成式 AI 应用案例到更广泛的场景，我们的合作伙伴关系使组织能够更快速行动、更大胆创新，并自信地实现转型。AWS 和 MongoDB 正在共同塑造 AI 时代的可能性。</p>
<h2>年度全球云市场拓展合作伙伴：Google Cloud</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/google-cloud">Google Cloud</a> 正凭借其通过卓有成效的联合市场拓展计划加速新业务增长而获得此项荣誉。MongoDB 与 Google Cloud 的合作树立了深度协同的新标杆，通过驱动新业务增长，并在全球多家最复杂的跨国企业中持续创造价值，展现了合作关系的实质影响力。Google Cloud 与 MongoDB 联合设立的销售发展代表项目是此项成功的基石，它确保新生代人才在掌握一套能终身受益的销售方法论的同时，更能获得与全球顶尖企业合作的宝贵机会。凭借其对创新市场拓展战略的坚定承诺，Google Cloud 持续成为推动 MongoDB 全球增长的重要力量。</p>
<h2>年度全球系统集成商合作伙伴：Accenture</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-mainframe-modernization">Accenture</a> 作为全球系统集成合作伙伴展现出卓越承诺，在其软件工程服务线内部设立了专注于 MongoDB 的能力中心。</p>
<p>MongoDB 与 Accenture 强强联手，已为多个行业客户带来变革性成果，无论是助力领先银行的支付系统现代化升级，还是推动大型制造商的数据转型，均取得显著成效。同时，与 Accenture BFSI 业务部的密切合作继续推动全球客户成功。通过将 MongoDB 的现代数据库平台与 Accenture 深厚的行业专业知识相结合，我们的合作伙伴关系继续帮助客户实现现代化、解锁数据驱动的见解，并加速企业级的数字化转型。</p>
<h2>年度全球公共部门合作伙伴：Accenture Federal Services</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-federal-services">Accenture Federal Services</a> 在拓展 MongoDB 公共业务版图方面发挥了关键作用。凭借其规模优势、专业实力以及对客户成果的专注，Accenture Federal Services 不仅实现了显著的同比增长，更在与 MongoDB 的协同合作中，有力支撑了多项关键的政府使命。</p>
<p>MongoDB 和 Accenture Federal Services 正在帮助政府机构通过现代化旧版应用程序、无缝整合平台和简化架构来实现其效率目标，同时降低成本。我们欣然宣布，Accenture Federal Services 将作为核心赞助商，鼎力支持定于 2026 年 1 月举办的首届 MongoDB 公共部门峰会。</p>
<h2>年度全球技术合作伙伴：Confluent</h2>
<p>由 Apache Kafka® 联合创始人打造的数据流平台<a href="https://cloud.mongodb.com/ecosystem/confluent">Confluent</a>，作为 MongoDB 的战略合作伙伴，持续推动全球业务增长，目前双方已有逾 550 个联合客户部署，正为各行业创造实际价值。过去一年间，MongoDB 与 Confluent 持续深化全球市场战略协同，重点加速欧洲、中东、非洲及亚太地区的联合销售落地。</p>
<p>MongoDB 与 Confluent 共同推出了生成式 AI 快速入门、无代码流式演示，并联合撰写了智能体 AI 思想领导力内容，帮助客户加速基于动态数据的创新，并构建事件驱动的 AI 应用程序。我们的合作以紧密的现场协作为基础，持续举办联合赞助的人工智能研讨会和面向开发者的实践活动。我们市场拓展合作的一个突出亮点是与 Confluent 和 LangChain 联合举办的生成式 AI 开发者日活动，在活动中，人工智能领域的领导者与 80 多名开发者互动，展示了我们的联合平台如何实现具有成本效益、可解释性和个性化的多智能体系统。</p>
<h2>年度全球独立软件供应商合作伙伴：BigID</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/bigid-data-intelligence-platform">BigID</a> 一直是 MongoDB 的杰出独立软件供应商 (ISV) 合作伙伴，在金融服务、保险和医疗保健等领域持续为客户交付卓越成果。我们共同推出了具有影响力的联合市场拓展 (GTM) 举措，从客户活动到定制激励计划，均加速了增长机会。BigID 持续被公认为数据安全、隐私和 AI 数据管理领域的领导者，并且由于我们在全球范围内的紧密协作，进一步强化了 MongoDB 作为高度监管行业组织可信赖合作伙伴的地位。</p>
<h2>年度全球 AI 技术合作伙伴：LangChain</h2>
<p>MongoDB 与 <a href="https://cloud.mongodb.com/ecosystem/langchain">LangChain</a> 的合作开启了强大的全新集成功能，使开发者能够更轻松地在 MongoDB 上构建检索增强生成 (<a href="https://www.mongodb.com/resources/basics/artificial-intelligence/retrieval-augmented-generation">RAG</a>) 应用及智能体。</p>
<p>从混合搜索和父文档检索器到短期与长期记忆功能，这些联合解决方案正在帮助开发者不断拓展人工智能的可能性边界。通过联合举办的研讨会、在线讲座及实践培训，我们为开发者提供了必要的工具与知识，使其能够大规模应用这些技术能力。势头持续迅速增长，LangChain/MongoDB 和 LangGraph/MongoDB 软件包的采用率不断上升，这突显了我们合作的实力以及 MongoDB 与 LangChain 共同推动的蓬勃发展的开发者生态系统。</p>
<h2>年度全球 AI SI 合作伙伴：Pureinsights</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/pureinsights">Pureinsights</a> 利用其强大的 Discovery 平台，加速智能搜索和人工智能应用的开发。其一项突出能力是与 MongoDB 的 <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI</a> 集成，提供先进的嵌入、多模态嵌入和结果重排，在企业级应用案例中以强有力的验证记录和差异化价值获得认可。Pureinsights 专注于实施生成式 AI、向量搜索和 RAG 应用场景，持续赋能客户快速、可靠且大规模地创新。</p>
<h2>年度全球现代化合作伙伴：gravity9</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/gravity9">gravity9</a> 通过在各行各业和不同地区开展现代化和快速启动项目，并借助人工智能实现持续影响，确立了其作为 MongoDB 值得信赖合作伙伴的地位。作为战略实施合作伙伴，gravity9 专注于设计和交付云原生、可扩展的解决方案，帮助组织实现传统系统现代化、采用新技术、加快价值实现速度，并为人工智能时代做好准备。通过将深厚的技术专长与敏捷交付模式相结合，gravity9 使客户能够抓住转型机遇，无论是将工作负载迁移到云端、构建新的 AI 体验，还是优化现有基础设施。gravity9 与 MongoDB 专业服务团队的紧密合作持续获得高客户评价，彰显了其工作质量和可靠性。</p>
<h2>年度全球影响力合作伙伴：IBM</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/mongodb-on-ibm-z-and-linuxone">IBM</a> 因其在众多大型行业领先客户中的战略性贡献而荣获“年度影响力合作伙伴”奖。IBM 在与多家跨国金融机构签订大型合同方面发挥了关键作用，并正加大投资以在全球范围内拓展合作伙伴关系。双方的合作持续扩大，涵盖 Atlas 与 Watsonx.ai，以及在 IBM Z 系统或 LinuxONE 基础设施上日益增多的差异化项目。IBM 是大型企业值得信赖的供应商，并且是 MongoDB 超过 25% 最大客户的战略合作伙伴。</p>
<h2>年度全球云认证数据库即服务 (DBaaS) 合作伙伴：阿里巴巴</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/alibaba-cloud">阿里云</a>通过推动 ApsaraDB for MongoDB 的创新，并利用人工智能帮助组织构建现代化应用，确立了其作为 MongoDB 战略合作伙伴的地位。阿里云重点关注游戏、汽车、零售和金融科技等关键垂直行业，帮助企业更快实现现代化，并在各个行业中发掘新的机遇。通过将尖端数据解决方案与大胆的全球扩张战略相结合，阿里云赋能全球客户加速转型，无论是扩展数字平台、提供新的客户体验，还是优化关键任务工作负载。</p>
<h2>展望未来</h2>
<p>恭喜所有 2025 年度全球合作伙伴奖获得者！他们对创新、合作和客户成功的承诺，已经并将继续对全球组织产生持久影响。这些奖项不仅表彰过去一年的成就，也彰显了 MongoDB 与合作伙伴携手共创未来的愿景。</p>
<div class="callout">
<p>要了解更多关于 MongoDB 合作伙伴计划的信息，请访问我们的<a href="https://www.mongodb.com/partners?tck=partner_awards_blog_2025">合作伙伴页面</a>。</p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 00:59:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-cn</link>
      <guid>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-cn</guid>
    </item><item>
      <title>우수성을 기념하기: 2025년 MongoDB 글로벌 파트너 어워드</title>
      <description><![CDATA[<p>AI 와 빠른 기술 변화로 인해 재편되는 세상에서 한 가지 분명한 것은 우리의 파트너들이 MongoDB로 미래를 이끌고 있다는 점입니다. 우리는 함께 고객이 레거시 시스템을 현대화하고, 보안부터 예산 제약에 이르는 과제를 해결하고, 다음 AI 기반 애플리케이션을 빌드할 수 있도록 지원합니다.</p>
<p>그래서 우리는 2025년을 선도하는 파트너를 기념하는 연례 MongoDB 글로벌 파트너 어워드를 발표하게 되어 자랑스럽습니다. 선구적인 AI 및 현대화부터 공공 부문 혁신 추진, 대담한 시장 진출 협업 구축에 이르기까지 이러한 파트너들은 우수성의 기준을 설정합니다. 그들의 리더십은 단순히 변화를 가져오는 것이 아니라, 무엇이 가능한지를 새롭게 정의합니다.</p>
<h2>올해의 글로벌 클라우드 파트너: Microsoft</h2>
<p>매년 탁월한 성장을 이룬 <a href="https://cloud.mongodb.com/ecosystem/microsoft-power-platform-mongodb-connector">Microsoft</a>를 MongoDB의 올해의 글로벌 클라우드 파트너로 선정하게 되어 자랑스럽게 생각합니다. MongoDB 및 Microsoft는 의료, 통신, 금융 서비스 등의 산업 전반에 걸쳐 강력한 추진력을 제공함으로써 조직이 탁월한 고객 경험을 제공하는 훌륭한 애플리케이션을 구축할 수 있도록 지원하고 있습니다.</p>
<p>Microsoft의 협업, 고객 성공, 클라우드 리더십에 대한 깊은 헌신은 MongoDB의 파트너 에코시스템에서 없어서는 안 될 부분입니다. 파트너십의 강점은 계속 강화되고 있습니다. 실제로 MongoDB는 최근 “통합 데이터 솔루션 플레이”의 Microsoft 파트너로 선정되었습니다. 이를 통해 고객은 <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> on Azure와 네이티브 Microsoft 서비스 간의 공동 통합 및 시장 출시(GTM) 리소스를 활용할 수 있게 되었습니다.</p>
<h2>올해의 글로벌 AI 클라우드 파트너: Amazon Web Services(AWS)</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/amazon-web-services">AWS</a>는 고객이 MongoDB를 통해 AI의 잠재력을 최대한 발휘할 수 있도록 지원하는 원동력이 되어 왔으며, Amazon Bedrock과 MongoDB Atlas를 활용하여 가장 시간이 많이 걸리는 워크플로를 12주에서 10분으로 단축한 AI 솔루션을 빌드한 Novo Nordisk와의 협업이 대표적인 사례입니다. Novo Nordisk와의 협업은 생성형 AI 시대에 고객을 위한 비즈니스 차별화를 창출하는 파트너십의 힘을 보여주는 많은 사례 중 하나에 불과합니다.</p>
<p>또한 MongoDB는 AWS의 생성형 AI 역량 출시 파트너로 선정되어 AI 분야에서의 협력을 더욱 강화했습니다. 획기적인 생성형 AI 사용 사례와 그 이상을 통해 우리의 파트너십은 조직이 더 빠르게 움직이고, 더 과감하게 혁신하며, 자신감을 가지고 변혁할 수 있도록 지원합니다. AWS와 MongoDB는 함께 AI 시대에 가능성을 열어가고 있습니다.</p>
<h2>올해의 글로벌 클라우드 GTM 파트너: Google Cloud</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/google-cloud">Google Cloud</a>는 효과적인 공동 GTM 이니셔티브를 통해 새로운 사업을 가속화한 공로를 인정받았습니다. MongoDB와 Google Cloud의 파트너십은 세계에서 가장 복잡한 글로벌 엔터프라이즈에서 새로운 비즈니스를 창출하고 영향을 미치는 의미 있는 협업의 기준을 설정했습니다. Google Cloud와 MongoDB의 공동 영업 개발 담당자 프로그램은 이러한 성공의 주춧돌이 되었으며, 초기 단계의 인재들이 세계 최대 조직과 함께 일할 기회를 얻고, 그들의 커리어 전반에 걸쳐 유용하게 제공할 영업 전략을 배울 수 있도록 보장합니다. Google Cloud는 혁신적인 GTM 전략에 대한 공동의 노력 덕분에 MongoDB의 글로벌 성장의 주도적인 힘이 되고 있습니다.</p>
<h2>올해의 글로벌 시스템 통합자 파트너: Accenture</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-mainframe-modernization">Accenture</a>는 소프트웨어 엔지니어링 서비스 라인 내에 MongoDB를 위한 전담 우수 센터를 설립하여 글로벌 SI 파트너로서 탁월한 노력을 기울였습니다.</p>
<p>MongoDB와 Accenture는 함께 주요 은행의 결제 현대화부터 주요 제조업체의 데이터 전환에 이르기까지 산업 전반에 걸쳐 변혁적인 고객 성과를 제공했습니다. 한편, Accenture의 BFSI 사업부와의 긴밀한 협력은 계속해서 글로벌 고객 성공을 촉진하고 있습니다. MongoDB의 현대적인 데이터베이스 플랫폼과 Accenture의 깊은 산업 전문성을 결합함으로써, 당사의 파트너십은 고객이 현대화하고, 데이터 기반 인사이트를 발굴하며, 엔터프라이즈 규모에서 디지털 혁신을 가속화할 수 있도록 지속적으로 지원합니다.</p>
<h2>올해의 글로벌 공공 기관 파트너: Accenture Federal Services</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-federal-services">Accenture Federal Services</a>는 공공 부문에서 MongoDB의 존재감을 강화하는 데 중요한 역할을 해왔습니다. 확장성과 전문성, 고객 성과에 대한 집중 덕분에 매년 괄목할 만한 성장을 이뤘으며, MongoDB와 협력하여 중요한 정부 임무를 지원했습니다.</p>
<p>MongoDB와 Accenture Federal Services는 정부 기관이 효율성 목표를 달성할 수 있도록 레거시 애플리케이션을 현대화하고, 플랫폼을 원활하게 통합하며, 아키텍처를 간소화하면서 비용을 절감하는 데 도움을 주고 있습니다. 2026년 1월에 열리는 첫 번째 MongoDB 공공 기관 서밋의 주요 후원자로 Accenture Federal Services를 모시게 되어 기쁩니다.</p>
<h2>올해의 글로벌 테크 파트너: Confluent</h2>
<p>Apache Kafka®의 공동 개발자가 구축한 데이터 스트리밍 플랫폼인 <a href="https://cloud.mongodb.com/ecosystem/confluent">Confluent</a>는 550개 이상의 공동 고객 배포를 통해 전 세계 산업에 영향을 미치며 전략적 파트너로 계속 활동하고 있습니다. 지난 한 해 동안 MongoDB와 Confluent는 글로벌 시장 진출(GTM) 조율을 강화하여 EMEA 및 APAC 지역에서 공동 판매 참여를 가속화하는 데 주력했습니다.</p>
<p>MongoDB와 Confluent는 고객이 데이터 이동 중 혁신을 가속화하고 이벤트 기반 AI 애플리케이션을 빌드할 수 있도록 생성형 AI 퀵스타트, 노코드 스트리밍 데모, 에이전트 AI 사고 리더십을 공동으로 제공해 왔습니다. 우리의 파트너십은 지속적인 공동 후원 AI 워크숍과 개발자 참여 이벤트를 통해 강력한 필드 협업을 기반으로 합니다. GTM 협업의 하이라이트는 Confluent 및 LangChain과 공동으로 진행한 생성형 AI 개발자 데이로, 80명 이상의 개발자가 참여하여 양사의 통합 플랫폼이 어떻게 비용 효율적이고 설명 가능한 개인화된 멀티 에이전트 시스템을 구현하는지를 보여줬습니다.</p>
<h2>올해의 글로벌 ISV 파트너: BigID</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/bigid-data-intelligence-platform">BigID</a>는 금융 서비스, 보험, 의료 분야에서 고객에게 지속적으로 강력한 성과를 제공하며 MongoDB의 뛰어난 ISV 파트너로 자리매김했습니다. 양사는 고객 이벤트부터 성장 기회를 가속화한 맞춤형 인센티브 프로그램에 이르기까지 영향력 있는 공동 GTM 이니셔티브를 함께 시작했습니다. BigID는 데이터 보안, 개인정보 보호 및 AI 데이터 관리 분야의 선두주자로 계속 인정받고 있으며, 전 세계적인 긴밀한 협력을 통해 규제가 엄격한 산업에서 활동하는 조직의 신뢰할 수 있는 파트너로서 MongoDB의 입지를 더욱 강화하고 있습니다.</p>
<h2>올해의 글로벌 AI 기술 파트너: LangChain</h2>
<p>MongoDB와 <a href="https://cloud.mongodb.com/ecosystem/langchain">LangChain</a>의 파트너십은 개발자들이 MongoDB에서 검색 증강 생성(RAG) 애플리케이션과 지능형 에이전트를 보다 쉽게 구축할 수 있도록 강력한 새로운 통합 기능을 제공합니다.</p>
<p>하이브리드 검색과 상위 문서 검색부터 단기 및 장기 메모리 역량에 이르기까지, 이러한 공동 솔루션은 개발자가 AI로 가능성의 경계를 확장하도록 돕고 있습니다. 공동 워크숍, 웨비나 및 실습 교육을 통해 개발자들이 이러한 역량을 대규모로 도입할 수 있도록 도구와 지식을 제공했습니다. 모멘텀은 계속 빠르게 커지고 있으며, LangChain/MongoDB 및 LangGraph/MongoDB 패키지의 채택도 꾸준히 증가하고 있습니다. 이는 우리 협업의 강점과 MongoDB와 LangChain이 함께 조성하고 있는 번창하는 개발자 에코시스템을 잘 보여줍니다.</p>
<h2>올해의 글로벌 AI SI 파트너: Pureinsights</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/pureinsights">Pureinsights</a>는 강력한 Discovery Platform을 통해 지능형 검색 및 AI 애플리케이션 개발을 가속화합니다. <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI by MongoDB</a>와의 통합을 통해 고급 임베딩, 멀티모달 임베딩, 결과 재랭킹을 제공하며 엔터프라이즈급 사용 사례에서 강력한 증명점 추적 기록과 차별화된 가치를 인정받고 있는 점도 돋보이는 역량입니다. Pureinsights는 생성형 AI, 벡터 검색, RAG 사용 사례 구현에 중점을 두고 클라이언트가 신속하고 안정적으로, 확장할 수 있도록 지속적으로 지원합니다.</p>
<h2>올해의 글로벌 현대화 파트너: gravity9</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/gravity9">gravity9</a>은 AI를 활용하여 여러 산업과 지역에 걸쳐 현대화 및 도약 프로젝트를 일관되게 수행함으로써 신뢰할 수 있는 MongoDB 파트너로 자리매김했습니다. 전략적 구현 파트너인 gravity9은 조직이 레거시 시스템을 현대화하고, 새로운 기술을 채택하고, 가치 창출 시간을 단축하고, AI 시대에 대비할 수 있도록 지원하는 확장 가능한 클라우드 네이티브 솔루션을 설계 및 제공하는 것을 전문으로 합니다. gravity9은 심층적인 기술 전문성과 애자일 제공 모델을 결합하여 고객이 워크로드를 클라우드로 이동하거나, 새로운 AI 경험을 구축하거나, 기존 인프라를 최적화하는 등 혁신의 기회를 열 수 있도록 지원합니다. gravity9과 MongoDB Professional Services 팀 간의 긴밀한 협업은 지속적으로 높은 고객 평가를 받아 작업의 품질과 신뢰성을 입증하고 있습니다.</p>
<h2>올해의 글로벌 임팩트 파트너: IBM</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/mongodb-on-ibm-z-and-linuxone">IBM</a>은 업계를 선도하는 다양한 대형 클라이언트에서 전략적으로 기여한 공로를 인정받아 올해의 임팩트 파트너 상을 수상하게 되었습니다. IBM은 여러 다국적 금융 기관과 대규모 계약을 체결하는 데 중요한 역할을 했으며, 전 세계적으로 파트너십을 확대하기 위해 더 많은 투자를 하고 있습니다. 파트너십은 계속해서 성장하고 있으며, Atlas 및 Watsonx.ai와의 협력을 포함하여 IBM Z Systems 또는 LinuxONE 인프라에서 차별화된 프로젝트의 수가 증가하고 있습니다. IBM은 대기업의 신뢰받는 공급업체이며, MongoDB의 최대 고객 중 25% 이상과 전략적 파트너 관계를 맺고 있습니다.</p>
<h2>올해의 글로벌 클라우드 - 인증 DBaaS 파트너: Alibaba</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/alibaba-cloud">Alibaba Cloud</a>는 MongoDB용 ApsaraDB로 혁신을 주도하고 AI를 활용하여 조직이 최신 애플리케이션을 빌드할 수 있도록 지원함으로써 전략적 MongoDB 파트너로 자리매김했습니다. 게임, 자동차, 소매, 핀테크와 같은 주요 분야에 중점을 둔 Alibaba Cloud는 엔터프라이즈가 더 빠르게 현대화하고 다양한 산업에서 새로운 기회를 창출할 수 있도록 지원합니다. Alibaba Cloud는 최첨단 데이터 솔루션과 과감한 글로벌 확장 전략을 결합하여 전 세계 고객이 디지털 플랫폼을 확장하고, 새로운 고객 경험을 제공하며, 미션 크리티컬 워크로드를 최적화하여 혁신을 가속화할 수 있도록 지원합니다.</p>
<h2>미래 전망</h2>
<p>2025년 글로벌 파트너 어워드 수상자 모두 축하드립니다! 혁신, 협업, 고객 성공을 위한 이들의 노력은 전 세계 조직에 지속적인 영향을 미쳤으며 앞으로도 계속될 것입니다. 이러한 상은 지난 한 해의 성과를 인정할 뿐만 아니라, 파트너와 함께 미래에 함께 빌드할 MongoDB의 비전을 강조합니다.</p>
<div class="callout">
<p>MongoDB 파트너 프로그램에 대해 자세히 알아보려면 <a href="https://www.mongodb.com/partners?tck=partner_awards_blog_2025">파트너 페이지</a>를 방문하세요.</p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 00:59:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-kr</link>
      <guid>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-kr</guid>
    </item><item>
<title>Celebrando la excelencia: Premios Globales de Socios de MongoDB 2025</title>
<description><![CDATA[<p>En un mundo que está siendo remodelado por la IA y los rápidos cambios tecnológicos, una cosa está clara: nuestros socios están impulsando el futuro con MongoDB. Juntos, ayudamos a los clientes a modernizar los sistemas heredados, a resolver desafíos que van desde la seguridad hasta las restricciones presupuestarias, y a crear la siguiente ola de aplicaciones basadas en IA.</p>
<p>Por eso nos enorgullece anunciar los premios anuales MongoDB Global Partner Awards, que celebran a los socios que lideraron el camino en 2025. Desde ser pioneros en la IA y la modernización, hasta avanzar en la innovación del sector público y construir colaboraciones audaces de salida al mercado, estos socios establecen el estándar de excelencia. Su liderazgo no solo marca la diferencia, sino que redefine lo que es posible.</p>
<h2>Socio global de cloud del año: Microsoft</h2>
<p>Nos enorgullece reconocer a <a href="https://cloud.mongodb.com/ecosystem/microsoft-power-platform-mongodb-connector">Microsoft</a> por su excepcional crecimiento interanual como Socio Global de Cloud del Año de MongoDB. Juntos, MongoDB y Microsoft han impulsado un fuerte avance en sectores como la atención médica, las telecomunicaciones y los servicios financieros, ayudando a las organizaciones a crear excelentes aplicaciones que ofrecen experiencias excepcionales al cliente.</p>
<p>El profundo compromiso de Microsoft con la colaboración, el éxito del cliente y el liderazgo en la nube lo convierte en una parte indispensable del ecosistema de socios de MongoDB. La fortaleza de la asociación sigue creciendo; de hecho, MongoDB fue seleccionado recientemente como socio de Microsoft para la iniciativa “Unificar su solución de datos”, lo que permite a los clientes beneficiarse de las integraciones conjuntas y los recursos de comercialización (GTM) entre <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> en Azure y los servicios nativos de Microsoft.</p>
<h2>Socio global de IA en Cloud del año: Amazon Web Services (AWS)</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/amazon-web-services">AWS</a> ha sido una fuerza impulsora en ayudar a los clientes a desbloquear todo el potencial de la IA con MongoDB, como lo destaca nuestro trabajo con Novo Nordisk, que aprovechó Amazon Bedrock y MongoDB Atlas para crear una solución de IA que redujo uno de sus flujos de trabajo que más tiempo consumían de 12 semanas a 10 minutos. El trabajo con Novo Nordisk es solo un ejemplo de muchos que demuestra el poder de nuestra colaboración para crear una diferenciación empresarial para los clientes en la era de la IA generativa.</p>
<p>MongoDB también fue un socio de lanzamiento de la competencia de IA generativa de AWS, lo que estrecha aún más nuestra colaboración en IA. Desde casos de uso revolucionarios de IA generativa hasta mucho más, nuestra asociación empodera a las organizaciones para moverse más rápido, innovar con más audacia y transformarse con confianza. Juntos, AWS y MongoDB están moldeando lo que es posible en la era de la IA.</p>
<h2>Socio global de GTM en cloud del año: Google Cloud</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/google-cloud">Google Cloud</a> está siendo honrado por acelerar nuevos negocios a través de iniciativas conjuntas de GTM impactantes. La asociación de MongoDB con Google Cloud ha establecido el estándar para una colaboración significativa, impulsando nuevos negocios y generando impacto en algunas de las empresas globales más complejas del mundo. El programa conjunto de Representante de Desarrollo de Ventas de Google Cloud y MongoDB ha sido la piedra angular de este éxito, asegurando que los talentos en etapa inicial tengan la oportunidad de trabajar con las organizaciones más grandes del mundo mientras aprenden un manual de ventas que les servirá bien para el resto de su carrera. Google Cloud sigue siendo una fuerza impulsora en el crecimiento global de MongoDB gracias a su compromiso conjunto con estrategias innovadoras de GTM.</p>
<h2>Socio global integrador de sistemas del año: Accenture</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-mainframe-modernization">Accenture</a> ha demostrado un compromiso excepcional como socio global de SI, estableciendo un centro de excelencia dedicado a MongoDB dentro de su línea de servicios de ingeniería de software.</p>
<p>Juntos, MongoDB y Accenture han proporcionado resultados transformadores para los clientes en diversas industrias, desde la modernización de pagos para un banco líder hasta la transformación de datos para un importante fabricante. Mientras tanto, una colaboración más estrecha con la unidad de negocio BFSI de Accenture ha seguido impulsando el éxito global del cliente. Al combinar la moderna plataforma de base de datos de MongoDB con la profunda experiencia de Accenture en la industria, nuestra colaboración sigue ayudando a los clientes a modernizarse, desbloquear perspectivas basadas en datos y acelerar la transformación digital a escala empresarial.</p>
<h2>Socio global del sector público del año: Accenture Federal Services</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-federal-services">Accenture Federal Services</a> ha desempeñado un rol fundamental en el avance de la presencia de MongoDB en el sector público. Gracias a su escala, experiencia y enfoque en los resultados para el cliente, ha impulsado un crecimiento notable año tras año y ha respaldado misiones críticas del gobierno en coordinación con MongoDB.</p>
<p>MongoDB y Accenture Federal Services están ayudando a las agencias de gobierno a cumplir sus objetivos de eficiencia mediante la modernización de aplicaciones heredadas, la consolidación fluida de plataformas y la optimización de arquitecturas, todo ello mientras se reducen los costos. Nos complace contar con Accenture Federal Services como patrocinador principal de nuestra Cumbre inaugural del Sector Público de MongoDB en enero de 2026.</p>
<h2>Socio tecnológico global del año: Confluent</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/confluent">Confluent</a>, la plataforma de transmisión de datos creada por los cocreadores de Apache Kafka®, sigue siendo un socio estratégico con más de 550 implementaciones conjuntas de clientes que generan impacto en diversas industrias a nivel mundial. Durante el último año, MongoDB y Confluent han fortalecido la alineación global de comercialización (GTM), enfocándose en acelerar la interacción de co-venta en EMEA y APAC.</p>
<p>Juntos, MongoDB y Confluent han proporcionado inicios rápidos de IA generativa y demostraciones de transmisión sin código, y han coescrito liderazgo de pensamiento sobre IA agéntica para ayudar a los clientes a acelerar la innovación con datos en movimiento y crear aplicaciones de IA impulsadas por eventos. Nuestra asociación está anclada en una sólida colaboración de campo, con talleres de IA copatrocinados y eventos prácticos para desarrolladores de forma continua. Un aspecto destacado de nuestra colaboración en GTM fue un Día del Desarrollador de IA Generativa conjunto con Confluent y LangChain, donde los líderes de IA involucraron a más de 80 desarrolladores para demostrar cómo nuestras plataformas combinadas permiten sistemas multiagente rentables, explicables y personalizados.</p>
<h2>Socio ISV global del año: BigID</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/bigid-data-intelligence-platform">BigID</a> ha seguido siendo un destacado socio ISV para MongoDB, ofreciendo consistentemente resultados sólidos para los clientes en los sectores de servicios financieros, seguros y atención médica. Juntos, hemos lanzado iniciativas conjuntas de GTM impactantes, desde eventos para clientes hasta programas de incentivos personalizados que han acelerado las oportunidades de crecimiento. BigID sigue siendo reconocido como líder en seguridad de datos, privacidad y gestión de datos de IA y, gracias a nuestra estrecha alineación global, está fortaleciendo aún más la posición de MongoDB como un socio de confianza para las organizaciones que operan en industrias altamente reguladas.</p>
<h2>Socio global de tecnología de IA del año: LangChain</h2>
<p>La asociación de MongoDB con <a href="https://cloud.mongodb.com/ecosystem/langchain">LangChain</a> ha desbloqueado nuevas y poderosas integraciones que facilitan a los desarrolladores la creación de aplicaciones de generación aumentada por recuperación (RAG) y agentes inteligentes en MongoDB.</p>
<p>Desde la búsqueda híbrida y los recuperadores de documentos padre hasta las capacidades de memoria a corto y largo plazo, estas soluciones conjuntas están ayudando a los desarrolladores a ampliar los límites de lo que es posible con la IA. A través de talleres conjuntos, seminarios web y entrenamiento práctico, hemos equipado a los desarrolladores con las herramientas y el conocimiento para adoptar estas capacidades a gran escala. El impulso sigue creciendo rápidamente, y la adopción de los paquetes LangChain/MongoDB y LangGraph/MongoDB continúa aumentando, destacando la fortaleza de nuestra colaboración y el próspero ecosistema de desarrolladores que MongoDB y LangChain están fomentando juntos.</p>
<h2>Socio global de IA SI del año: Pureinsights</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/pureinsights">Pureinsights</a> acelera la búsqueda inteligente y el desarrollo de aplicaciones de IA con su potente plataforma Discovery. Una capacidad destacada es su integración con <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI de MongoDB</a>, que ofrece embeddings avanzados, embeddings multimodales y reclasificación de resultados, ganando reconocimiento por su sólido historial de resultados comprobados y su valor diferenciado en casos de uso de nivel empresarial. Con un enfoque en implementar IA generativa, búsqueda vectorial y casos de uso de RAG, Pureinsights sigue empoderando a los clientes para que innoven de manera rápida, confiable y a gran escala.</p>
<h2>Socio global de modernización del año: gravity9</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/gravity9">gravity9</a> se ha establecido como un socio de confianza de MongoDB al ofrecer un impacto constante a través de proyectos de modernización y aceleración en diversas industrias y geografías, impulsados por IA. Como socio estratégico de implementación, gravity9 se especializa en diseñar y ofrecer soluciones nativas de la nube y escalables que ayudan a las organizaciones a modernizar los sistemas heredados, adoptar nuevas tecnologías, acelerar el tiempo de creación de valor y prepararse para la era de la IA. Al combinar una profunda experiencia técnica con un modelo de entrega ágil, gravity9 permite a los clientes desbloquear oportunidades de transformación, ya sea trasladando cargas de trabajo a la nube, creando nuevas experiencias de IA u optimizando la infraestructura existente. La estrecha colaboración de gravity9 con los equipos de Professional Services de MongoDB ha generado calificaciones consistentemente altas de los clientes, demostrando la calidad y fiabilidad de su trabajo.</p>
<h2>Socio global de impacto del año: IBM</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/mongodb-on-ibm-z-and-linuxone">IBM</a> está siendo reconocida con el premio Socio de Impacto del Año por sus contribuciones estratégicas a una variedad de grandes clientes líderes en la industria. IBM ha desempeñado un rol fundamental en la obtención de grandes contratos con varias instituciones financieras multinacionales y está invirtiendo más en la expansión de la colaboración a nivel mundial. La asociación sigue creciendo, incluyendo Atlas y Watsonx.ai, y un número cada vez mayor de proyectos diferenciados en la infraestructura de IBM Z Systems o LinuxONE. IBM es un proveedor de confianza para las grandes empresas y es un socio estratégico de más del 25% de los clientes más grandes de MongoDB.</p>
<h2>Socio global de cloud certificado de DBaaS del año: Alibaba</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/alibaba-cloud">Alibaba Cloud</a> se ha consolidado como un socio estratégico de MongoDB al impulsar la innovación con ApsaraDB para MongoDB y utilizar la IA para ayudar a las organizaciones a crear aplicaciones modernas. Con un fuerte enfoque en verticales clave como juegos, automoción, venta minorista y fintech, Alibaba Cloud está permitiendo a las empresas modernizarse más rápidamente y desbloquear nuevas oportunidades en todas las industrias. Al combinar soluciones de datos de vanguardia con una audaz estrategia de expansión global, Alibaba Cloud permite a los clientes de todo el mundo acelerar la transformación, ya sea escalando plataformas digitales, brindando nuevas experiencias a los clientes u optimizando las cargas de trabajo críticas para la misión.</p>
<h2>Mirando hacia adelante</h2>
<p>¡Felicidades a todos los ganadores de los Global Partner Awards 2025! Su compromiso con la innovación, la colaboración y el éxito de los clientes ha tenido y tendrá un impacto duradero en las organizaciones de todo el mundo. Estos premios no solo reconocen los logros del año pasado, sino que también subrayan la visión de MongoDB de lo que, junto con nuestros socios, construiremos en el futuro.</p>
<div class="callout">
<p><b>Para aprender más sobre el Programa de Socios de MongoDB, visite nuestra <a href="https://www.mongodb.com/partners?tck=partner_awards_blog_2025">página de socios</a>.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 00:59:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-es</link>
      <guid>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-es</guid>
    </item><item>
      <title>Hommage à l’excellence : MongoDB Global Partner Awards 2025</title>
      <description><![CDATA[<p>Dans un monde en pleine transformation par l'IA et les changements technologiques rapides, une chose est certaine : nos partenaires représentent l'avenir grâce à MongoDB. Ensemble, nous aidons les clients à moderniser les systèmes hérités, à résoudre des défis allant de la sécurité aux contraintes budgétaires, et à construire la prochaine vague d'applications alimentées par l'IA.</p>
<p>C'est pourquoi nous sommes fiers d'annoncer la cérémonie annuelle des MongoDB Global Partner Awards, célébrant les partenaires qui ont ouvert la voie en 2025. Qu'il s'agisse d’innovation en matière d'IA et de modernisation, de faire progresser l'innovation dans le secteur public ou de mettre en place des collaborations audacieuses de mise sur le marché, ces partenaires définissent la norme en matière d'excellence. Leur leadership ne fait pas que faire bouger les choses : il redéfinit ce qui est possible.</p>
<h2>Partenaire cloud mondial de l'année : Microsoft</h2>
<p>Nous sommes fiers de reconnaître <a href="https://cloud.mongodb.com/ecosystem/microsoft-power-platform-mongodb-connector">Microsoft</a> pour sa croissance exceptionnelle d'année en année en tant que Partenaire cloud mondial de l’année de MongoDB. Ensemble, MongoDB et Microsoft ont généré une forte dynamique dans des secteurs tels que la santé, les télécommunications et les services financiers, aidant les entreprises à créer d'excellentes applications qui offrent des expériences client exceptionnelles.</p>
<p>L'engagement profond de Microsoft en faveur de la collaboration, du succès client et du leadership dans le cloud en fait un élément indispensable de l'écosystème de partenaires de MongoDB. La force du partenariat continue de croître. En fait, MongoDB a récemment été sélectionné comme partenaire Microsoft pour une « solution unifiée pour vos données », qui permet aux clients de bénéficier des intégrations conjointes et des ressources de mise sur le marché entre <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> sur Azure et les services Microsoft natifs.</p>
<h2>Partenaire international de l'année pour le cloud IA : Amazon Web Services (AWS)</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/amazon-web-services">AWS</a> a été un moteur pour aider les clients à exploiter tout le potentiel de l'IA avec MongoDB, illustré par notre collaboration avec Novo Nordisk, qui a utilisé Amazon Bedrock et MongoDB Atlas pour créer une solution d'IA réduisant un de leurs processus les plus chronophages de 12 semaines à 10 minutes. Le travail avec Novo Nordisk n'est qu'un exemple parmi tant d'autres qui démontre la puissance de notre partenariat pour créer une différenciation commerciale pour les clients à l'ère de l'IA générative.</p>
<p>MongoDB a également été un partenaire de lancement pour la compétence d’IA générative d'AWS, renforçant ainsi notre collaboration en IA. Des cas d'utilisation révolutionnaires de l'IA générative et au-delà, notre partenariat permet aux entreprises de progresser plus rapidement, d'innover plus audacieusement et de se transformer en toute confiance. Ensemble, AWS et MongoDB façonnent ce qui est possible à l'ère de l'IA.</p>
<h2>Partenaire international de l'année pour la mise sur le marché du cloud : Google Cloud</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/google-cloud">Google Cloud</a> est honoré pour avoir accéléré de nouvelles activités grâce à des initiatives de mise sur le marché conjointes percutantes. Le partenariat de MongoDB avec Google Cloud a défini la norme en matière de collaboration significative, générant de nouvelles activités et produisant un impact sur certaines des entreprises internationales les plus complexes. Le programme conjoint de représentants du développement commercial de Google Cloud et MongoDB a été la pierre angulaire de ce succès, garantissant aux jeunes talents la possibilité de travailler avec les plus grandes entreprises du monde tout en apprenant un manuel de vente qui leur sera utile pour le reste de leur carrière. Google Cloud continue d'être une force motrice dans la croissance internationale de MongoDB grâce à son engagement commun dans des stratégies de mise sur le marché innovantes.</p>
<h2>Global System Integrator Partner of the Year: Accenture</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-mainframe-modernization">Accenture</a> has demonstrated exceptional commitment as a Global SI Partner, establishing a dedicated center of excellence for MongoDB within its software engineering services line.</p>
<p>Together, MongoDB and Accenture have delivered transformative customer outcomes across industries, from payments modernization for a leading bank to data transformation for a major manufacturer. Meanwhile, closer collaboration with Accenture's BFSI business unit has continued to drive customer success globally. By combining MongoDB's modern database platform with Accenture's deep industry expertise, our partnership continues to help customers modernize, unlock data-driven insights, and accelerate digital transformation at enterprise scale.</p>
<h2>Global Public Sector Partner of the Year: Accenture Federal Services</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-federal-services">Accenture Federal Services</a> has played a pivotal role in advancing MongoDB's presence in the public sector. Thanks to its scale, expertise, and focus on customer outcomes, it has driven remarkable year-over-year growth and supported critical government missions in coordination with MongoDB.</p>
<p>MongoDB and Accenture Federal Services are helping government agencies meet their efficiency goals by modernizing legacy applications, seamlessly consolidating platforms, and streamlining architectures, all while reducing costs. We are thrilled to have Accenture Federal Services as a leading sponsor of our first MongoDB Public Sector Summit in January 2026.</p>
<h2>Global Tech Partner of the Year: Confluent</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/confluent">Confluent</a>, the data streaming platform built by the co-creators of Apache Kafka®, remains a strategic partner with more than 550 joint customer deployments driving impact across industries worldwide. Over the past year, MongoDB and Confluent strengthened global go-to-market alignment, focusing on accelerating co-sell engagement across EMEA and APAC.</p>
<p>Together, MongoDB and Confluent have delivered generative AI quickstarts, no-code streaming demos, and co-authored agentic AI thought leadership to help customers accelerate innovation with data in motion and build event-driven AI applications. Our partnership is anchored in strong field collaboration, with co-sponsored AI workshops and hands-on developer events. A highlight of our go-to-market collaboration was a joint generative AI Developer Day with Confluent and LangChain, where AI leaders engaged more than 80 developers to show how our combined platforms enable cost-effective, explainable, and personalized multi-agent systems.</p>
<h2>Global ISV Partner of the Year: BigID</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/bigid-data-intelligence-platform">BigID</a> has remained a standout ISV partner for MongoDB, consistently delivering strong results for customers across financial services, insurance, and healthcare. Together, we launched impactful joint go-to-market initiatives, from customer events to tailored incentive programs that accelerated growth opportunities. BigID continues to be recognized as a leader in data security, privacy, and AI data management, and thanks to our close global alignment, it further strengthens MongoDB's position as a trusted partner for organizations operating in highly regulated industries.</p>
<h2>Global AI Tech Partner of the Year: LangChain</h2>
<p>MongoDB's partnership with <a href="https://cloud.mongodb.com/ecosystem/langchain">LangChain</a> has unlocked powerful new integrations that make it easier for developers to build retrieval-augmented generation applications and intelligent agents on MongoDB.</p>
<p>From hybrid search and parent document retrievers to short- and long-term memory capabilities, these joint solutions are helping developers push the boundaries of what's possible with AI. Through joint workshops, webinars, and hands-on training, we have equipped developers with the tools and knowledge to adopt these capabilities at scale. Momentum continues to build rapidly, and adoption of the LangChain/MongoDB and LangGraph/MongoDB packages keeps growing, underscoring the strength of our collaboration and the thriving developer ecosystem that MongoDB and LangChain are enabling together.</p>
<h2>Global AI SI Partner of the Year: Pureinsights</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/pureinsights">Pureinsights</a> accelerates intelligent search and AI application development with its powerful Discovery Platform. A standout capability is its integration with Voyage AI by MongoDB, which delivers advanced embeddings, multimodal embeddings, and result reranking, earning recognition for a strong track record of proof points and differentiated value in enterprise-grade use cases. With a focus on implementing generative AI, vector search, and retrieval-augmented generation use cases, Pureinsights continues to empower customers to innovate quickly, reliably, and at scale.</p>
<h2>Global Modernization Partner of the Year: gravity9</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/gravity9">gravity9</a> has established itself as a trusted MongoDB partner by delivering consistent impact through modernization and jumpstart projects across industries and geographies, powered by AI. As a strategic implementation partner, gravity9 specializes in designing and delivering cloud-native, scalable solutions that help organizations modernize legacy systems, adopt new technologies, accelerate time to value, and prepare for the AI era. By pairing deep technical expertise with an agile delivery model, gravity9 enables customers to unlock transformation opportunities, whether moving workloads to the cloud, building new AI experiences, or optimizing existing infrastructure. gravity9's close collaboration with MongoDB's Professional Services teams has earned consistently high customer ratings, demonstrating the quality and reliability of their work.</p>
<h2>Global Impact Partner of the Year: IBM</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/mongodb-on-ibm-z-and-linuxone">IBM</a> is being recognized with the Impact Partner of the Year award for its strategic contributions across a range of large, industry-leading customers. IBM has played a pivotal role in landing major deals with several multinational financial institutions and is investing further in expanding the partnership globally. The partnership continues to grow, including with Atlas and watsonx.ai, and a growing number of differentiated projects on IBM Z or LinuxONE infrastructure. IBM is a trusted vendor to large enterprises and a strategic partner to more than 25% of MongoDB's largest customers.</p>
<h2>Global Cloud - Certified DBaaS Partner of the Year: Alibaba</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/alibaba-cloud">Alibaba Cloud</a> has established itself as a strategic MongoDB partner by driving innovation with ApsaraDB for MongoDB and using AI to help organizations build modern applications. With a strong focus on key verticals such as gaming, automotive, retail, and fintech, Alibaba Cloud is enabling businesses to modernize faster and unlock new opportunities across industries. By combining cutting-edge data solutions with a bold global expansion strategy, Alibaba Cloud empowers customers worldwide to accelerate transformation, whether scaling digital platforms, delivering new customer experiences, or optimizing mission-critical workloads.</p>
<h2>Looking ahead</h2>
<p>Congratulations to all of the 2025 Global Partner Award winners! Their commitment to innovation, collaboration, and customer success has had, and will continue to have, a lasting impact on organizations around the world. These awards don't just recognize the past year's achievements; they also highlight MongoDB's vision for what we and our partners will build together in the future.</p>
<div class="callout">
<p><b>To learn more about the MongoDB Partner Program, visit our <a href="https://www.mongodb.com/partners?tck=partner_awards_blog_2025">partner page</a>.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 00:59:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-fr</link>
      <guid>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-fr</guid>
    </item><item>
      <title>Celebrating Excellence: MongoDB Global Partner Awards 2025</title>
      <description><![CDATA[<p>In a world being reshaped by AI and rapid technological change, one thing is clear: our partners are driving the future forward with MongoDB. Together, we help customers modernize legacy systems, solve challenges ranging from security to budget constraints, and build the next wave of AI-powered applications.</p>
<p>That's why we're proud to announce the annual MongoDB Global Partner Awards, celebrating the partners who led the way in 2025. From pioneering AI and modernization to advancing public sector innovation and building bold go-to-market collaborations, these partners set the standard for excellence. Their leadership doesn't just move the needle; it redefines what's possible.</p>
<h2>Global Cloud Partner of the Year: Microsoft</h2>
<p>We are proud to recognize <a href="https://cloud.mongodb.com/ecosystem/microsoft-power-platform-mongodb-connector">Microsoft</a> for exceptional year-over-year growth as MongoDB's Global Cloud Partner of the Year. Together, MongoDB and Microsoft have driven strong momentum across industries such as healthcare, telecommunications, and financial services, helping organizations build exceptional applications that deliver outstanding customer experiences.</p>
<p>Microsoft's deep commitment to collaboration, customer success, and cloud leadership makes it an indispensable part of MongoDB's partner ecosystem. The strength of the partnership continues to grow; in fact, MongoDB was recently selected as a Microsoft partner for a "Unify your data" solution play, which lets customers take advantage of joint integrations and go-to-market (GTM) resources between <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> on Azure and native Microsoft services.</p>
<h2>Global AI Cloud Partner of the Year: Amazon Web Services (AWS)</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/amazon-web-services">AWS</a> has been a driving force in helping customers unlock the full potential of AI with MongoDB, highlighted by our work with Novo Nordisk, which used Amazon Bedrock and MongoDB Atlas to build an AI solution that cut one of its most time-consuming workflows from 12 weeks to 10 minutes. The work with Novo Nordisk is just one example of many that demonstrates the power of our partnership in creating business differentiation for customers in the era of generative AI.</p>
<p>MongoDB was also a launch partner for the AWS Generative AI Competency, further deepening our collaboration on AI. From groundbreaking generative AI use cases and beyond, our partnership empowers organizations to move faster, innovate more boldly, and transform with confidence. Together, AWS and MongoDB are shaping what's possible in the AI era.</p>
<h2>Global Cloud GTM Partner of the Year: Google Cloud</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/google-cloud">Google Cloud</a> is being honored for accelerating new business through impactful joint GTM initiatives. MongoDB's partnership with Google Cloud has set the standard for meaningful collaboration, driving new business and delivering impact at some of the world's most complex global enterprises. The joint Google Cloud and MongoDB Sales Development Representative program has been a cornerstone of this success, ensuring that early-career talent gets the opportunity to work with the world's largest organizations while learning a sales playbook that will serve them well for the rest of their careers. Google Cloud continues to be a driving force in MongoDB's global growth thanks to its shared commitment to innovative GTM strategies.</p>
<h2>Global System Integrator Partner of the Year: Accenture</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-mainframe-modernization">Accenture</a> has demonstrated exceptional commitment as a Global SI Partner, establishing a dedicated center of excellence for MongoDB within its software engineering services line.</p>
<p>Together, MongoDB and Accenture have delivered transformative customer outcomes across industries, from payments modernization for a leading bank to data transformation for a major manufacturer. Meanwhile, closer collaboration with Accenture's BFSI business unit has continued to drive customer success globally. By combining MongoDB's modern database platform with Accenture's deep industry expertise, our partnership continues to help customers modernize, unlock data-driven insights, and accelerate digital transformation at enterprise scale.</p>
<h2>Global Public Sector Partner of the Year: Accenture Federal Services</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-federal-services">Accenture Federal Services</a> has played a pivotal role in advancing MongoDB's presence in the public sector. Thanks to its scale, expertise, and focus on customer outcomes, it has driven remarkable year-over-year growth and supported critical government missions in coordination with MongoDB.</p>
<p>MongoDB and Accenture Federal Services are helping government agencies meet their efficiency goals by modernizing legacy applications, seamlessly consolidating platforms, and streamlining architectures, all while reducing costs. We are thrilled to have Accenture Federal Services as a leading sponsor of our first MongoDB Public Sector Summit in January 2026.</p>
<h2>Global Tech Partner of the Year: Confluent</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/confluent">Confluent</a>, the data streaming platform built by the co-creators of Apache Kafka®, remains a strategic partner with more than 550 joint customer deployments driving impact across industries worldwide. Over the past year, MongoDB and Confluent strengthened global go-to-market (GTM) alignment, focusing on accelerating co-sell engagement across EMEA and APAC.</p>
<p>Together, MongoDB and Confluent have delivered generative AI quickstarts, no-code streaming demos, and co-authored agentic AI thought leadership to help customers accelerate innovation with data in motion and build event-driven AI applications. Our partnership is anchored in strong field collaboration, with ongoing co-sponsored AI workshops and hands-on developer events. A highlight of our GTM collaboration was a joint generative AI Developer Day with Confluent and LangChain, where AI leaders engaged more than 80 developers to demonstrate how our combined platforms enable cost-effective, explainable, and personalized multi-agent systems.</p>
<h2>Global ISV Partner of the Year: BigID</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/bigid-data-intelligence-platform">BigID</a> remains a standout ISV partner for MongoDB, consistently delivering strong results for customers across financial services, insurance, and healthcare. Together, we launched impactful joint GTM initiatives, from customer events to tailored incentive programs that accelerated growth opportunities. BigID continues to be recognized as a leader in data security, privacy, and AI data management, and thanks to our close global alignment, it further strengthens MongoDB's position as a trusted partner for organizations operating in highly regulated industries.</p>
<h2>Global AI Tech Partner of the Year: LangChain</h2>
<p>MongoDB's partnership with <a href="https://cloud.mongodb.com/ecosystem/langchain">LangChain</a> has unlocked powerful new integrations that make it easier for developers to build RAG applications and intelligent agents on MongoDB.</p>
<p>From hybrid search and parent document retrievers to short- and long-term memory capabilities, these joint solutions are helping developers push the boundaries of what's possible with AI. Through joint workshops, webinars, and hands-on training, we have equipped developers with the tools and knowledge to adopt these capabilities at scale. Momentum continues to build rapidly, and adoption of the LangChain/MongoDB and LangGraph/MongoDB packages keeps growing, underscoring the strength of our collaboration and the thriving developer ecosystem that MongoDB and LangChain are enabling together.</p>
<h2>Global AI SI Partner of the Year: Pureinsights</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/pureinsights">Pureinsights</a> accelerates intelligent search and AI application development with its powerful Discovery Platform. A standout capability is its integration with <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI by MongoDB</a>, which delivers advanced embeddings, multimodal embeddings, and result reranking, earning recognition for a strong track record of proof points and differentiated value in enterprise-grade use cases. With a focus on implementing generative AI, vector search, and RAG use cases, Pureinsights continues to empower customers to innovate quickly, reliably, and at scale.</p>
<h2>Global Modernization Partner of the Year: gravity9</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/gravity9">gravity9</a> has established itself as a trusted MongoDB partner by delivering consistent impact through modernization and jumpstart projects across industries and geographies, powered by AI. As a strategic implementation partner, gravity9 specializes in designing and delivering cloud-native, scalable solutions that help organizations modernize legacy systems, adopt new technologies, accelerate time to value, and prepare for the AI era. By pairing deep technical expertise with an agile delivery model, gravity9 enables customers to unlock transformation opportunities, whether moving workloads to the cloud, building new AI experiences, or optimizing existing infrastructure. gravity9's close collaboration with MongoDB's Professional Services teams has earned consistently high customer ratings, demonstrating the quality and reliability of their work.</p>
<h2>Global Impact Partner of the Year: IBM</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/mongodb-on-ibm-z-and-linuxone">IBM</a> is being recognized with the Impact Partner of the Year award for its strategic contributions across a range of large, industry-leading customers. IBM has played a crucial role in landing major deals with several multinational financial institutions and is investing further in expanding the partnership globally. The partnership continues to grow, including with Atlas and watsonx.ai, and a growing number of differentiated projects on IBM Z or LinuxONE infrastructure. IBM is a trusted vendor to large enterprises and a strategic partner to more than 25% of MongoDB's largest customers.</p>
<h2>Global Cloud - Certified DBaaS Partner of the Year: Alibaba</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/alibaba-cloud">Alibaba Cloud</a> has established itself as a strategic MongoDB partner by driving innovation with ApsaraDB for MongoDB and using AI to help organizations build modern applications. With a strong focus on key verticals such as gaming, automotive, retail, and fintech, Alibaba Cloud is enabling businesses to modernize faster and unlock new opportunities across industries. By combining cutting-edge data solutions with a bold global expansion strategy, Alibaba Cloud empowers customers worldwide to accelerate transformation, whether scaling digital platforms, delivering new customer experiences, or optimizing mission-critical workloads.</p>
<h2>Looking ahead</h2>
<p>Congratulations to all of the 2025 Global Partner Award winners! Their commitment to innovation, collaboration, and customer success has had, and will continue to have, a lasting impact on organizations around the world. These awards don't just recognize the past year's achievements; they also highlight MongoDB's vision for what we and our partners will build together in the future.</p>
<div class="callout">
<p><b>To learn more about the MongoDB Partner Program, visit our <a href="https://www.mongodb.com/partners?tck=partner_awards_blog_2025">partner page</a>.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 00:59:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-br</link>
      <guid>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-br</guid>
    </item><item>
      <title>Celebrating Excellence: MongoDB Global Partner Awards 2025</title>
      <description><![CDATA[<p>In a world shaped by artificial intelligence and rapid technological change, one thing is clear: our partners are shaping the future with MongoDB. Together, we help customers modernize legacy systems, solve challenges from security to budget constraints, and build the next wave of AI-powered applications.</p>
<p>That's why we're proud to announce the annual MongoDB Global Partner Awards, recognizing the partners who led the way in 2025. From pioneering AI and modernization to advancing public sector innovation and building bold go-to-market collaborations, these partners set the standard for excellence. Their leadership doesn't just move the needle; it redefines what's possible.</p>
<h2>Global Cloud Partner of the Year: Microsoft</h2>
<p>We are proud to recognize <a href="https://cloud.mongodb.com/ecosystem/microsoft-power-platform-mongodb-connector">Microsoft</a> for exceptional year-over-year growth as MongoDB's Global Cloud Partner of the Year. Together, MongoDB and Microsoft have driven strong momentum across industries such as healthcare, telecommunications, and financial services, helping organizations build great applications that deliver exceptional customer experiences.</p>
<p>Microsoft's strong commitment to collaboration, customer success, and cloud leadership makes it an indispensable part of MongoDB's partner ecosystem. The strength of the partnership continues to grow; in fact, MongoDB was recently selected as a Microsoft partner for a "Unify your data" solution play, which lets customers benefit from joint integrations and go-to-market (GTM) resources between <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> on Azure and native Microsoft services.</p>
<h2>Global AI Cloud Partner of the Year: Amazon Web Services (AWS)</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/amazon-web-services">AWS</a> has been a driving force in helping customers unlock the full potential of AI with MongoDB. This is underscored by our work with Novo Nordisk, which used Amazon Bedrock and MongoDB Atlas to build an AI solution that cut one of its most time-consuming workflows from 12 weeks to 10 minutes. The collaboration with Novo Nordisk is just one example of many that illustrates the strength of our partnership in creating business differentiation for customers in the era of generative AI.</p>
<p>MongoDB was also a launch partner for the AWS Generative AI Competency, further deepening our collaboration on AI. From groundbreaking generative AI use cases and beyond, our partnership empowers organizations to move faster, innovate more boldly, and transform with confidence. Together, AWS and MongoDB are shaping what's possible in the AI era.</p>
<h2>Global Cloud GTM Partner of the Year: Google Cloud</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/google-cloud">Google Cloud</a> is being honored for accelerating new business through impactful joint GTM initiatives. The partnership between MongoDB and Google Cloud has set the standard for meaningful collaboration, driving new business and delivering impact at some of the world's most complex global enterprises. The joint Google Cloud and MongoDB Sales Development Representative program has been a cornerstone of this success, ensuring that early-career talent gets the opportunity to work with the world's largest organizations while learning sales strategies that will serve them well for the rest of their careers. Thanks to a shared commitment to innovative GTM strategies, Google Cloud remains a driving force in MongoDB's global growth.</p>
<h2>Global System Integrator Partner of the Year: Accenture</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-mainframe-modernization">Accenture</a> has demonstrated exceptional commitment as a Global SI Partner, establishing a dedicated center of excellence for MongoDB within its software engineering services line.</p>
<p>Together, MongoDB and Accenture have delivered transformative customer outcomes across industries, from payments modernization for a leading bank to data transformation for a major manufacturer. Meanwhile, closer collaboration with Accenture's BFSI business unit has continued to advance customer success worldwide. By combining MongoDB's modern database platform with Accenture's deep industry expertise, our partnership continues to help customers modernize, unlock data-driven insights, and accelerate digital transformation at scale.</p>
<h2>Global Public Sector Partner of the Year: Accenture Federal Services</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-federal-services">Accenture Federal Services</a> has played a decisive role in advancing MongoDB's presence in the public sector. Thanks to its scale, expertise, and focus on customer outcomes, the company has achieved remarkable year-over-year growth and supported critical government missions in coordination with MongoDB.</p>
<p>MongoDB and Accenture Federal Services are helping government agencies meet their efficiency goals by modernizing legacy applications, seamlessly consolidating platforms, and optimizing architectures, all while reducing costs. We are delighted to welcome Accenture Federal Services as a leading sponsor of our first MongoDB Public Sector Summit in January 2026.</p>
<h2>Global Tech Partner of the Year: Confluent</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/confluent">Confluent</a>, the data streaming platform built by the co-creators of Apache Kafka®, remains a strategic partner with more than 550 joint customer deployments driving impact across industries worldwide. Over the past year, MongoDB and Confluent strengthened global go-to-market (GTM) alignment, focusing on accelerating co-sell engagement across EMEA and APAC.</p>
<p>Together, MongoDB and Confluent have delivered generative AI quickstarts and no-code streaming demos, and have co-developed agentic AI thought leadership to help customers accelerate innovation with data in motion and build event-driven AI applications. Our partnership is grounded in strong field collaboration, with ongoing co-sponsored AI workshops and hands-on developer events. An outstanding highlight of our GTM collaboration was a joint Generative AI Developer Day with Confluent and LangChain, where AI leaders engaged more than 80 developers to demonstrate how our combined platforms enable cost-effective, explainable, and personalized multi-agent systems.</p>
<h2>Global ISV Partner of the Year: BigID</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/bigid-data-intelligence-platform">BigID</a> remains an outstanding ISV partner for MongoDB, consistently delivering strong results for customers in financial services, insurance, and healthcare. Together, we launched impactful joint GTM initiatives, from customer events to tailored incentive programs that accelerated growth opportunities. BigID continues to be recognized as a leader in data security, privacy, and AI data management, and thanks to our close global alignment, it strengthens MongoDB's position as a trusted partner for companies operating in highly regulated industries.</p>
<h2>Globaler KI-Tech-Partner des Jahres: LangChain</h2>
<p>Durch die Partnerschaft zwischen MongoDB und <a href="https://cloud.mongodb.com/ecosystem/langchain">LangChain</a> sind leistungsstarke neue Integrationen möglich geworden, die es Entwicklern erleichtern, Retrieval Augmented Generation (RAG)-Anwendungen und intelligente Agenten auf MongoDB zu erstellen.</p>
<p>Von der Hybridsuche und dem Abrufen übergeordneter Dokumente bis hin zu Kurz- und Langzeitgedächtnisfunktionen helfen diese gemeinsamen Lösungen Entwicklern, die Grenzen dessen, was mit KI möglich ist, zu erweitern. Durch gemeinsame Workshops, Webinare und praktische Schulungen haben wir Entwickler mit den Werkzeugen und dem Wissen ausgestattet, um diese Fähigkeiten zu skalieren. Die Dynamik nimmt weiterhin rasant zu, und die Akzeptanz der Pakete LangChain/MongoDB und LangGraph/MongoDB wächst weiter, was die Stärke unserer Zusammenarbeit und die florierende Entwicklerumgebung unterstreicht, die MongoDB und LangChain gemeinsam ermöglichen.</p>
<h2>Globaler KI-SI-Partner des Jahres: Pureinsights</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/pureinsights">Pureinsights</a> beschleunigt mit seiner leistungsstarken Discovery Platform die Entwicklung intelligenter Such- und KI-Anwendungen. Eine herausragende Funktion ist die Integration mit <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI von MongoDB</a>, die erweiterte Einbettungen, multimodale Einbettungen und eine Neubewertung der Ergebnisse ermöglicht und Anerkennung für die überzeugende Erfolgsbilanz und den differenzierten Wert in Anwendungsfällen auf Unternehmensebene erhält. Mit dem Schwerpunkt auf der Implementierung von generative KI-, Vektorsuch- und Retrieval-Augmented-Generation-Anwendungsfällen ermöglicht Pureinsights seinen Kunden weiterhin, schnell, zuverlässig und im großen Umfang zu skalieren.</p>
<h2>Globaler Modernisierungspartner des Jahres: gravity9</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/gravity9">gravity9</a> hat sich als vertrauenswürdiger MongoDB-Partner etabliert, indem es durch Modernisierungs- und Jumpstart-Projekte in verschiedenen Branchen und Regionen, unterstützt durch KI, konsistente Wirkung erzielt. Als strategischer Implementierungspartner spezialisiert sich gravity9 auf die Konzeption und Bereitstellung cloud-nativer, skalierbarer Lösungen, die Organisationen dabei helfen, Legacy-Systeme zu modernisieren, neue Technologien einzuführen, die Time-to-Value zu beschleunigen und sich auf die Ära der KI vorzubereiten. Durch die Kombination von tiefgreifender technischer Expertise mit einem agilen Bereitstellungsmodell ermöglicht gravity9 Kunden, Transformationsmöglichkeiten zu erschließen, sei es durch die Verlagerung von Workloads in die Cloud, den Aufbau neuer KI-Erfahrungen oder die Optimierung bestehender Infrastruktur. Die enge Zusammenarbeit von gravity9 mit den Professional-Services-Teams von MongoDB hat zu konstant hohen Kundenbewertungen geführt, die die Qualität und Zuverlässigkeit ihrer Arbeit belegen.</p>
<h2>Globaler Impact-Partner des Jahres: IBM</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/mongodb-on-ibm-z-and-linuxone">IBM</a> wird für seine strategischen Beiträge bei einer Vielzahl großer, branchenführender Kunden mit der Auszeichnung „Impact Partner of the Year“ ausgezeichnet. IBM hat eine entscheidende Rolle bei der Sicherung großer Verträge mit mehreren multinationalen Finanzinstituten gespielt und investiert weiter in den weltweiten Ausbau der Partnerschaft. Die Partnerschaft wächst weiter, unter anderem mit Atlas &amp; Watsonx.ai, und es gibt eine steigende Zahl differenzierter Projekte auf der IBM-Z-Systems- oder LinuxOne-Infrastruktur. IBM ist ein vertrauenswürdiger Anbieter für große Unternehmen und strategischer Partner für über 25 % der größten Kunden von MongoDB.</p>
<h2>Globaler Cloud-zertifizierter DBaaS-Partner des Jahres: Alibaba</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/alibaba-cloud">Alibaba Cloud</a> hat sich als strategischer MongoDB Partner etabliert, indem es Innovationen mit ApsaraDB für MongoDB vorantreibt und KI nutzt, um Unternehmen bei der Entwicklung moderner Anwendungen zu unterstützen. Mit einem starken Fokus auf Schlüsselbranchen wie Gaming, Automobilindustrie, Einzelhandel und Fintech ermöglicht Alibaba Cloud Unternehmen eine schnellere Modernisierung und die Erschließung neuer Möglichkeiten in allen Branchen. Durch die Kombination hochmoderner Datenlösungen mit einer mutigen globalen Expansionsstrategie ermöglicht Alibaba Cloud Kunden weltweit, die Transformation zu beschleunigen, sei es durch die Skalierung digitaler Plattformen, die Bereitstellung neuer Kundenerfahrung oder die Optimierung unternehmenskritischer Workloads.</p>
<h2>Ausblick</h2>
<p>Herzlichen Glückwunsch an alle Gewinner des Global Partner Award 2025! Ihr Engagement für Innovation, Zusammenarbeit und Customer Success hat und wird einen nachhaltigen Einfluss auf Unternehmen auf der ganzen Welt haben. Diese Auszeichnungen würdigen nicht nur die Erfolge des vergangenen Jahres, sondern unterstreichen auch die Vision von MongoDB für das, was wir gemeinsam mit unseren Partnern in Zukunft gemeinsam aufbauen werden.</p>
<div class="callout">
<p><b>To learn more about the MongoDB <a href="https://www.mongodb.com/partners?tck=partner_awards_blog_2025">Partner Program</a>, please visit our partner page.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 00:59:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-de</link>
      <guid>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-de</guid>
    </item><item>
      <title>Celebrating Excellence: MongoDB Global Partner Awards 2025</title>
      <description><![CDATA[<p>In a world reshaped by AI and rapid technological change, one thing is clear: our partners are powering the future with MongoDB. Together, we help customers modernize legacy systems, solve challenges from security to budget constraints, and build the next wave of AI-powered applications.</p>
<p>That's why we're proud to announce this year's edition of the MongoDB Global Partner Awards, celebrating the partners who led the way in 2025. From driving AI and modernization, to innovating in the public sector, to forging bold go-to-market collaborations, these partners set the standard for excellence. Their leadership doesn't just move the needle; it redefines what's possible.</p>
<h2>Global Cloud Partner of the Year: Microsoft</h2>
<p>We are proud to recognize <a href="https://cloud.mongodb.com/ecosystem/microsoft-power-platform-mongodb-connector">Microsoft</a> for exceptional year-over-year growth as MongoDB's Global Cloud Partner of the Year. Together, MongoDB and Microsoft have driven strong momentum in industries such as healthcare, telecommunications, and financial services, helping organizations build great applications that deliver exceptional customer experiences.</p>
<p>Microsoft's deep commitment to collaboration, customer success, and cloud leadership makes it an indispensable part of MongoDB's partner ecosystem. The strength of the partnership continues to grow; in fact, MongoDB was recently selected as a Microsoft partner for the &quot;Unify your data solution play&quot; initiative, which lets customers benefit from joint integrations and go-to-market (GTM) resources between <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> on Azure and Microsoft-native services.</p>
<h2>Global Cloud AI Partner of the Year: Amazon Web Services (AWS)</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/amazon-web-services">AWS</a> has been a driving force in helping customers unlock the full potential of AI with MongoDB, as shown by our work with Novo Nordisk, which used Amazon Bedrock and MongoDB Atlas to build an AI solution capable of reducing one of its most demanding workflows from 12 weeks to 10 minutes. The collaboration with Novo Nordisk is just one of many examples of the power of our partnership to create business differentiation for customers in the generative AI era.</p>
<p>MongoDB was also a launch partner for the AWS Generative AI Competency, further strengthening our collaboration in AI. From innovative generative AI use cases and beyond, our partnership enables organizations to move faster, innovate more boldly, and transform with confidence. Together, AWS and MongoDB are shaping what's possible in the AI era.</p>
<h2>Global Cloud GTM Partner of the Year: Google Cloud</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/google-cloud">Google Cloud</a> was recognized for accelerating new business through high-impact joint GTM initiatives. MongoDB's partnership with Google Cloud has set the standard for meaningful collaboration, driving new business and generating impact at some of the world's most complex global enterprises. The joint Google Cloud and MongoDB Sales Development Representative program has been a cornerstone of this success, giving early-career talent the opportunity to work with the world's largest organizations while learning a sales strategy that will serve them for the rest of their careers. Google Cloud continues to be a driving force in MongoDB's global growth through its shared commitment to innovative GTM strategies.</p>
<h2>Global System Integrator Partner of the Year: Accenture</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-mainframe-modernization">Accenture</a> has shown exceptional commitment as a global system integrator partner, establishing a dedicated center of excellence for MongoDB within its software engineering service line.</p>
<p>Together, MongoDB and Accenture have delivered transformative results for customers across industries, from modernizing payments for a leading bank to data transformation for a major manufacturer. Meanwhile, closer collaboration with Accenture's BFSI business unit has continued to drive global customer success. By combining MongoDB's modern database platform with Accenture's deep industry expertise, our partnership continues to help customers modernize, unlock data-driven insights, and accelerate digital transformation at enterprise scale.</p>
<h2>Global Public Sector Partner of the Year: Accenture Federal Services</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/accenture-federal-services">Accenture Federal Services</a> has played a pivotal role in advancing MongoDB's presence in the public sector. Through its scale, expertise, and focus on customer outcomes, it has driven remarkable year-over-year growth and supported critical government missions in coordination with MongoDB.</p>
<p>MongoDB and Accenture Federal Services are helping government agencies meet their efficiency goals by modernizing legacy applications, seamlessly consolidating platforms, and simplifying architectures, all while reducing costs. We are excited to have Accenture Federal Services as a key sponsor of our first MongoDB Public Sector Summit, taking place in January 2026.</p>
<h2>Global Technology Partner of the Year: Confluent</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/confluent">Confluent</a>, the data streaming platform built by the co-creators of Apache Kafka®, continues to be a strategic partner with more than 550 joint customer deployments delivering impact across industries worldwide. Over the past year, MongoDB and Confluent have strengthened global go-to-market (GTM) alignment, focusing on accelerating co-sell engagement across the EMEA and APAC regions.</p>
<p>Together, MongoDB and Confluent have delivered generative AI quick starts, no-code streaming demos, and agentic AI thought leadership content to help customers accelerate innovation with data in motion and build event-driven AI applications. Our partnership is anchored in strong field collaboration, with co-sponsored AI workshops and hands-on developer events. A standout moment in our GTM collaboration was a Generative AI Developer Day held with Confluent and LangChain, where AI leaders engaged more than 80 developers to showcase how our combined platforms enable cost-effective, explainable, and personalized multi-agent systems.</p>
<h2>Global ISV Partner of the Year: BigID</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/bigid-data-intelligence-platform">BigID</a> has remained a standout ISV partner for MongoDB, consistently delivering strong results for customers in financial services, insurance, and healthcare. Together, we have launched high-impact joint GTM initiatives, from customer events to tailored incentive programs that have accelerated growth opportunities. BigID continues to be recognized as a leader in data security, privacy, and AI data management, and thanks to our close global alignment it is further strengthening MongoDB's position as a trusted partner for organizations operating in highly regulated industries.</p>
<h2>Global AI Technology Partner of the Year: LangChain</h2>
<p>MongoDB's partnership with <a href="https://cloud.mongodb.com/ecosystem/langchain">LangChain</a> has unlocked powerful new integrations that make it easier for developers to build retrieval-augmented generation (RAG) applications and intelligent agents on MongoDB.</p>
<p>From hybrid search and parent-document retrieval to short- and long-term memory capabilities, these joint solutions help developers expand what is possible with AI. Through joint workshops, webinars, and hands-on training, we have equipped developers with the tools and knowledge to adopt these capabilities at scale. Momentum continues to build rapidly, and adoption of the LangChain/MongoDB and LangGraph/MongoDB packages keeps expanding, highlighting the strength of our collaboration and the thriving developer ecosystem that MongoDB and LangChain are fostering together.</p>
<h2>Global AI SI Partner of the Year: Pureinsights</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/pureinsights">Pureinsights</a> accelerates the development of intelligent search and AI applications with its powerful Discovery Platform. A standout capability is its integration with <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI by MongoDB</a>, which provides advanced embeddings, multimodal embeddings, and result reranking, earning recognition for a strong record of customer success and differentiated value in enterprise use cases. With a focus on implementing generative AI, vector search, and RAG use cases, Pureinsights continues to enable customers to innovate quickly, reliably, and at scale.</p>
<h2>Global Modernization Partner of the Year: gravity9</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/gravity9">gravity9</a> has established itself as a trusted MongoDB partner by delivering consistent, AI-powered impact through modernization and jumpstart projects across industries and geographies. As a strategic implementation partner, gravity9 specializes in designing and delivering cloud-native, scalable solutions that help organizations modernize legacy systems, adopt new technologies, accelerate time-to-value, and prepare for the AI era. By combining deep technical expertise with an agile delivery model, gravity9 enables customers to unlock transformation opportunities, whether by moving workloads to the cloud, building new AI experiences, or optimizing existing infrastructure. gravity9's close collaboration with MongoDB's professional services teams has earned consistently high customer ratings, demonstrating the quality and reliability of its work.</p>
<h2>Global Impact Partner of the Year: IBM</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/mongodb-on-ibm-z-and-linuxone">IBM</a> has been named Impact Partner of the Year for its strategic contributions across a variety of large, industry-leading customers. IBM has played a pivotal role in securing major contracts with several multinational financial institutions and is investing further in expanding the partnership globally. The partnership continues to grow, including with Atlas &amp; Watsonx.ai, along with a rising number of differentiated projects on IBM Z Systems or LinuxONE infrastructure. IBM is a trusted provider for large enterprises and a strategic partner for more than 25% of MongoDB's largest customers.</p>
<h2>Global Cloud Certified DBaaS Partner of the Year: Alibaba</h2>
<p><a href="https://cloud.mongodb.com/ecosystem/alibaba-cloud">Alibaba Cloud</a> has established itself as a strategic MongoDB partner by driving innovation with ApsaraDB for MongoDB and using AI to help organizations build modern applications. With a strong focus on key verticals such as gaming, automotive, retail, and fintech, Alibaba Cloud enables companies to modernize faster and unlock new opportunities across industries. By combining cutting-edge data solutions with a bold global expansion strategy, Alibaba Cloud empowers customers around the world to accelerate transformation, whether by scaling digital platforms, delivering new customer experiences, or optimizing mission-critical workloads.</p>
<h2>Looking ahead</h2>
<p>Congratulations to all the winners of the 2025 Global Partner Awards! Their commitment to innovation, collaboration, and customer success has had, and will continue to have, a lasting impact on organizations around the world. These awards not only recognize the achievements of the past year but also underscore MongoDB's vision for what we, together with our partners, will build in the future.</p>
<div class="callout">
<p><b>To learn more about the MongoDB <a href="https://www.mongodb.com/partners?tck=partner_awards_blog_2025">Partner Program</a>, visit our partners page.</b></p>
</div>	]]></description>
      <pubDate>Thu, 18 Sep 2025 00:59:00 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-it</link>
      <guid>https://www.mongodb.com/company/blog/news/celebrating-excellence-mongodb-global-partner-awards-2025-it</guid>
    </item><item>
      <title>The Future of AI Software Development is Agentic</title>
      <description><![CDATA[<p>Today in New York, our flagship MongoDB.local event is bringing together thousands of developers and tech leaders to discuss the future of building with MongoDB. Among the many exciting innovations and product announcements shared during the event, one theme has stood out: empowering developers to reliably build with AI and create AI solutions at scale on MongoDB. This post will explore how these advancements are set to accelerate developer productivity in the AI era.</p>
<h2>Ship faster with the MongoDB MCP Server</h2>
<p>Software development is rapidly evolving with AI tools powered by <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/large-language-models">large language models</a> (LLMs). From AI-driven editors like VS Code with GitHub Copilot and Windsurf, to terminal-based coding agents like Claude Code, these tools are transforming how developers work. While these tools already deliver tremendous productivity gains, coding agents are still limited by the context available to them. Since databases hold the core of most application-related data, access to configuration details, schemas, and sample data from databases is essential for generating accurate code and optimized queries.</p>
<p>With Anthropic’s introduction of the Model Context Protocol (MCP) in November 2024, a new way emerged to connect AI agents with data sources and services. Database connection and interaction quickly became one of the most popular use cases for MCP in agentic coding.</p>
<p>Today, we’re excited to announce the general availability (GA) of the MongoDB MCP Server, giving AI assistants and agents access to the context they need to explore, manage, and generate better code with MongoDB. Building on <a href="https://www.mongodb.com/company/blog/announcing-mongodb-mcp-server">our public preview</a> used by thousands of developers, the GA release introduces key capabilities to strengthen production readiness:</p>
<ul>
<li>Enterprise-grade authentication (OIDC, LDAP, Kerberos) and proxy connectivity.</li>
<li>Self-hosted remote deployment support, enabling shared deployments across teams, streamlined setup, and centralized configuration. Note that we recommend following <a href="https://www.mongodb.com/docs/mcp-server/security-best-practices/">security best practices</a>, such as implementing authentication for remote deployments.</li>
<li>Availability as a bundle with the <a href="https://www.mongodb.com/products/tools/vs-code">MongoDB for VS Code extension</a>, delivering a complete experience: visually explore your database with the extension, or interact with the same connection through your AI assistant, all without switching context.</li>
</ul>
<center><caption><b>Figure 1.</b> Overview of the MongoDB MCP Server.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-15 at 7.39.14 AM-669uf6pxwb.png" alt="This architecture diagram begins on the left with a box titled host with MCP client, which contains the logos for claude, cursor, visual studio code, and windsurf. Going down from this box, these clients connect back and forth with the LLM. To the right, the clients send data back and forth with MongoDB MCP Server. MCP Server then sends data over to the right for Atlas operations and database operations. At the bottom of the diagram is a line that says you can deploy locally or remotely (self-hosted). " title=" " style="width: 800px"/>
</div>
</figure>
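<p>To make the setup concrete, here is a minimal sketch of the configuration file that MCP-capable clients typically read to launch a server. The npm package name (<code>mongodb-mcp-server</code>), the <code>--connectionString</code> flag, and the local URI are assumptions; verify the exact invocation for your client against the MongoDB MCP Server documentation.</p>

```python
import json

# Hypothetical MCP client configuration for the MongoDB MCP Server.
# The package name ("mongodb-mcp-server"), the "--connectionString" flag,
# and the URI below are assumptions -- check the MCP Server docs for the
# exact invocation expected by your editor or agent.
config = {
    "mcpServers": {
        "mongodb": {
            "command": "npx",
            "args": [
                "-y",
                "mongodb-mcp-server",
                "--connectionString",
                "mongodb://localhost:27017/myapp",  # placeholder deployment
            ],
        }
    }
}

# Most clients expect this JSON in a settings file (location varies by tool).
print(json.dumps(config, indent=2))
```

<p>Once the client starts the server, the agent can list collections, sample documents, and run queries over that connection instead of guessing at the schema.</p>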
<h2>Meeting developers where they are with n8n and CrewAI integrations</h2>
<p>AI is transforming how developers build with MongoDB, not just in coding workflows, but also in creating AI applications and agents. From <a href="https://www.mongodb.com/resources/basics/artificial-intelligence/retrieval-augmented-generation">retrieval-augmented generation</a> (RAG) to powering agent memory, these systems demand a database that can handle diverse data types—such as unstructured text (e.g., messages, code, documents), vectors, and graphs—all while supporting comprehensive retrieval mechanisms at scale like vector and hybrid search. MongoDB delivers this in a single, unified platform: the flexible document model supports the varied data agents need to store, while advanced, natively integrated search capabilities eliminate the need for separate vector databases. With <a href="https://www.mongodb.com/products/platform/ai-search-and-retrieval">Voyage AI by MongoDB</a> providing state-of-the-art embedding models and rerankers, developers get a complete foundation for building intelligent agents without added infrastructure complexity.</p>
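<p>As a concrete illustration of the retrieval side of this foundation, the sketch below builds an Atlas Vector Search aggregation pipeline of the kind a RAG retriever would run. The index name <code>rag_index</code>, the field names, and the toy query vector are placeholder assumptions; in a real application the vector comes from an embedding model and the pipeline is passed to <code>collection.aggregate()</code> via a driver such as PyMongo.</p>

```python
# Sketch of an Atlas Vector Search pipeline for a RAG retriever.
# "rag_index", "embedding", and "text" are assumed names; the query
# vector would normally come from an embedding model (e.g., a Voyage AI
# model) and match the index's configured dimensionality.

def build_rag_pipeline(query_vector, k=5):
    """Return an aggregation pipeline retrieving the k nearest chunks."""
    return [
        {
            "$vectorSearch": {
                "index": "rag_index",        # assumed Atlas Vector Search index
                "path": "embedding",         # field holding stored vectors
                "queryVector": query_vector,
                "numCandidates": max(20 * k, 100),  # oversample for recall
                "limit": k,
            }
        },
        # Keep only the chunk text and similarity score for the LLM prompt.
        {"$project": {"_id": 0, "text": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_rag_pipeline([0.12, -0.03, 0.55], k=3)
```

<p>With a live deployment, <code>db.chunks.aggregate(pipeline)</code> would return the top-k chunks; a hybrid-search variant would add a full-text stage and fuse the two rankings.</p>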
<p>As part of our commitment to making MongoDB as easy to use as possible, we’re excited to announce new integrations with <a href="https://n8n.io/" target="_blank">n8n</a> and <a href="https://www.crewai.com/" target="_blank">CrewAI</a>.</p>
<p>n8n has emerged as one of the most popular platforms for building AI solutions, thanks to its visual interface and out-of-the-box components that make it simple and accessible to create reliable AI workflows. This integration adds official support for <a href="https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoremongodbatlas/" target="_blank">MongoDB Atlas Vector Search</a>, enabling developers to build RAG and agentic RAG systems through a flexible, visual interface. It also introduces an <a href="https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorymongochat/">agent chat memory node for n8n</a> agents, allowing conversations to persist by storing message history in MongoDB.</p>
<center><caption><b>Figure 2.</b> Example workflow with n8n and MongoDB powering an AI agent.</caption></center>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-15 at 7.47.47 AM-dvrl0z7d0x.png" alt="Screen grab of an example workflow in n8n." title=" " style="width: 800px"/>
</div>
</figure>
<p>Meanwhile, CrewAI—a fast-growing open-source framework for building and orchestrating AI agents—makes multi-agent collaboration more accessible to developers. As AI agents take on increasingly complex and productive workflows such as online research, report writing, and enterprise document analysis, multiple specialized agents need to interact and delegate tasks with each other effectively. CrewAI provides an easy and approachable way to build such multi-agent systems. Our official integration adds support for <a href="https://docs.crewai.com/en/tools/database-data/mongodbvectorsearchtool#mongodb-vector-search-tool" target="_blank">MongoDB Atlas Vector Search</a>, empowering developers to build agents that leverage RAG at scale. <a href="https://www.mongodb.com/docs/atlas/ai-integrations/crewai/build-agents/">Learn how</a> to implement agentic RAG with MongoDB Atlas and CrewAI.</p>
<h2>The future is agentic</h2>
<p>AI is fundamentally reshaping the entire software development lifecycle, including for developers building with MongoDB. New technology like the MongoDB MCP Server is paving the way for database-aware agentic coding, representing the future of software development. At the same time, we’re committed to meeting developers where they are: integrating our capabilities into their favorite frameworks and tools so they can benefit from MongoDB’s reliability and scalability to build AI apps and agents with ease.</p>
<div class="callout">
<p><b>Start building your applications with the MongoDB MCP Server today by <a href="https://www.mongodb.com/docs/mcp-server/get-started/">following the Get Started guide</a>.</b></p>
<p><b>Visit the <a href="https://www.mongodb.com/resources/use-cases/artificial-intelligence">AI Learning Hub</a> to learn more about building AI applications with MongoDB.</b></p>
</div>	]]></description>
      <pubDate>Wed, 17 Sep 2025 14:25:30 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/product-release-announcements/future-of-ai-software-development-is-agentic</link>
      <guid>https://www.mongodb.com/company/blog/product-release-announcements/future-of-ai-software-development-is-agentic</guid>
    </item><item>
      <title>MongoDB Queryable Encryption Expands Search Power</title>
      <description><![CDATA[<p>Today, MongoDB is expanding the power of <a href="https://www.mongodb.com/products/capabilities/security/encryption">Queryable Encryption</a> by introducing support for prefix, suffix, and substring queries. Now in public preview, these capabilities extend the technology beyond equality and range queries, unlocking broader use cases for secure, expressive search on encrypted data.</p>
<p>Developed by the <a href="https://www.mongodb.com/company/research/cryptography-research-group">MongoDB Cryptography Research Group</a>, Queryable Encryption is a groundbreaking, industry-first in use encryption technology. It enables customers to encrypt sensitive application data, store it in encrypted form in the MongoDB database, and perform expressive queries directly on that encrypted data.</p>
<p>This release provides organizations with the tools to perform flexible text searches on encrypted data, such as matching partial names, keywords, or identifiers, without ever exposing the underlying information. This helps strengthen data protection, simplify compliance, and remove the need for complex workarounds such as external search indexes, all without any changes to the application code.</p>
<p>With support for prefix, suffix, and substring queries, Queryable Encryption enables organizations to protect sensitive data throughout its lifecycle: at rest, in transit, and in use. As a result, teams can build secure, privacy-preserving applications without compromising functionality or performance. Queryable Encryption is available at no additional cost in <a href="https://www.mongodb.com/atlas">MongoDB Atlas</a>, <a href="https://www.mongodb.com/products/self-managed/enterprise-advanced">Enterprise Advanced</a>, and <a href="https://www.mongodb.com/products/self-managed/community-edition">Community Edition</a>.</p>
<h2>Encryption: Securing data across its lifecycle</h2>
<p>Many organizations must store and search sensitive data, such as personally identifiable information (PII) like names, Social Security numbers, or medical details, to power their applications. Implementing this securely presents real challenges. Encrypting data at rest and in transit is widely adopted and table stakes. However, encrypting data while it is actively being used, known as encryption in use, has historically been much harder to realize.</p>
<p>The dilemma is that traditional encryption makes data unreadable, preventing databases from running queries without first decrypting it. For instance, a healthcare provider may need to find all patients with diagnoses that include the word “diabetes.” However, without decrypting the medical records, the database cannot search for that term.</p>
<p>To work around this, many organizations either leave sensitive fields unencrypted or use complex and less secure workarounds, such as building separate search indexes. Both approaches add operational overhead and increase the risk of unauthorized access. They also make it harder to comply with regulations like the <a href="https://www.hhs.gov/hipaa/index.html" target="_blank">Health Insurance Portability and Accountability Act</a> (HIPAA), <a href="https://www.pcisecuritystandards.org/standards/" target="_blank">Payment Card Industry Data Security Standard</a> (PCI-DSS), or <a href="https://gdpr-info.eu/" target="_blank">General Data Protection Regulation</a> (GDPR), where violations can carry significant fines.</p>
<p>To fully protect sensitive data and meet compliance requirements, organizations need the ability to encrypt data in use, in transit, and at rest without compromising operational efficiency.</p>
<h2>Building secure applications with fewer tradeoffs</h2>
<p>MongoDB Queryable Encryption solves this quandary: it protects sensitive data while eliminating the tradeoff between security and development velocity. Organizations can encrypt sensitive data, such as personally identifiable information (PII) or protected health information (PHI), while still running queries directly on that data without exposing it to the database server.</p>
<p>With support for prefix, suffix, and substring queries (in public preview), Queryable Encryption enables MongoDB applications to encrypt sensitive fields such as names, email addresses, notes, and ID numbers while still performing native partial-match searches on encrypted data. This eliminates the impasse between protecting sensitive information and enabling essential application functionality.</p>
<p>For business leaders, Queryable Encryption strengthens data protection, supports compliance requirements, and reduces the risk of data exposure. This helps safeguard reputation, avoid costly fines, and eliminate the need for complex third-party solutions. For developers, advanced encrypted search is built directly into MongoDB’s query language. This eliminates the need for code changes, external indexes, or client-side workarounds while simplifying architectures and reducing overhead.</p>
<p>Some examples of what organizations can now achieve:</p>
<ul>
	<font size="4">
		<li><b>PII search for compliance and usability:</b> Regulations such as GDPR and HIPAA mandate strict privacy of personal information. With prefix queries, teams can retrieve users by last name or email prefix while ensuring the underlying data remains encrypted. This makes compliance easier without reducing search functionality.</li>
		<li><b>Keyword filtering in support workflows:</b> Customer service notes often contain sensitive details in free-text fields. With substring query support, teams can search encrypted notes for specific keywords, e.g. “refund,” “escalation,” or “urgent,” without exposing the contents of those notes.</li>
		<li><b>Secure ID validation:</b> Identity workflows often rely on partial identifiers such as the last digits of a Social Security Number in the U.S., a National Insurance Number in the UK, or an Aadhaar Number in India. Suffix queries enable these lookups on encrypted fields without revealing full values. This reduces the risk of data leaks in regulated environments.</li>
		<li><b>Case management for public agencies:</b> Case numbers and reference IDs in public sector applications often follow structured formats. Now agencies can securely retrieve records using region- or office-based prefixes, e.g. “NYC-” or “EUR-”, without exposing sensitive case metadata.</li>
	</font>
</ul>
<p><b><i>Note:</i></b> This functionality is in public preview. Therefore, MongoDB recommends that these new Queryable Encryption features not be used for production workloads until they are generally available in 2026. MongoDB wants to build and improve Queryable Encryption with customer needs and use cases in mind. As General Availability approaches, customers are encouraged to contact their account team or share feedback through the <a href="https://feedback.mongodb.com/" target="_blank">MongoDB Feedback Engine</a>.</p>
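<p>As a minimal sketch, the partial-match capabilities described above boil down to documents an application constructs: an encrypted-fields configuration and query filters. In the Python sketch below, the preview <code tabindex="0">queryType</code> names (<code tabindex="0">prefixPreview</code>, <code tabindex="0">substringPreview</code>) and the <code tabindex="0">$expr</code>-based <code tabindex="0">$encStrStartsWith</code>/<code tabindex="0">$encStrContains</code> operator shapes are assumptions based on the public preview and may change before general availability; field names are illustrative.</p>

```python
# Sketch only: documents an application would pass to a Queryable
# Encryption-enabled driver. Query type names and $encStr* operator shapes
# reflect the public preview (assumptions, subject to change before GA).

encrypted_fields = {
    "fields": [
        {
            "path": "lastName",                      # illustrative field
            "bsonType": "string",
            "queries": [{
                "queryType": "prefixPreview",        # assumed preview name
                "strMinQueryLength": 2,
                "strMaxQueryLength": 10,
                "caseSensitive": False,
                "diacriticSensitive": False,
            }],
        },
        {
            "path": "notes",                         # free-text support notes
            "bsonType": "string",
            "queries": [{
                "queryType": "substringPreview",     # assumed preview name
                "strMaxLength": 400,
                "strMinQueryLength": 3,
                "strMaxQueryLength": 10,
                "caseSensitive": False,
                "diacriticSensitive": False,
            }],
        },
    ]
}

# Example filters the application would hand to find() on encrypted fields:
prefix_filter = {"$expr": {"$encStrStartsWith": {"input": "$lastName",
                                                 "prefix": "Sm"}}}
substring_filter = {"$expr": {"$encStrContains": {"input": "$notes",
                                                  "substring": "refund"}}}
```

<p>The server evaluates these filters against the encrypted fields directly, so plaintext never reaches the database.</p>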
<h2>Robust data protection at every stage</h2>
<p>With Queryable Encryption, MongoDB offers unmatched protection for sensitive data throughout its entire lifecycle, whether in transit, at rest, or in use. With the addition of prefix, suffix, and substring query support, Queryable Encryption meets even more of the demands of modern applications, unlocking new use cases.</p>
<div class="callout">
<p><b>To learn more about Queryable Encryption and how it works, explore the <a href="https://www.mongodb.com/docs/upcoming/core/queryable-encryption/features/#queryable-encryption-features">features documentation page</a>. To get started using Queryable Encryption, read the <a href="https://www.mongodb.com/docs/upcoming/core/queryable-encryption/quick-start/#queryable-encryption-quick-start">Quick Start Guide</a>.</b></p>
</div>	]]></description>
      <pubDate>Wed, 17 Sep 2025 14:25:25 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/product-release-announcements/queryable-encryption-expands-search-power</link>
      <guid>https://www.mongodb.com/company/blog/product-release-announcements/queryable-encryption-expands-search-power</guid>
    </item><item>
      <title>Supercharge Self-Managed Apps With Search and Vector Search Capabilities</title>
      <description><![CDATA[<p>MongoDB is excited to announce the public preview of search and vector search capabilities for use with MongoDB Community Edition and MongoDB Enterprise Server. These new capabilities empower developers to prototype, iterate, and build sophisticated, AI-powered applications directly in self-managed environments with robust search functionality.</p>
<div class="callout">
<p><b>This post is also available in: <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-de" target="_blank">Deutsch</a>, <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-fr" target="_blank">Français</a>, <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-es" target="_blank">Español</a>, <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-br" target="_blank">Português</a>, <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-it" target="_blank">Italiano</a>, <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-kr" target="_blank">한국어</a>, <a href="https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-cn" target="_blank">简体中文</a>.</b></p>
</div>	
<p>Versatility is one of the reasons why developers love MongoDB. MongoDB can run anywhere.<sup>1</sup> This ranges from local setups, where many developers kickstart their MongoDB journey, to the largest enterprise data centers when it is time to scale, to MongoDB’s fully managed cloud service, <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a>. Regardless of where development takes place, MongoDB integrates effortlessly with any developer's workflow.</p>
<p><a href="https://www.mongodb.com/products/self-managed/community-edition">MongoDB Community Edition</a> is the free, source-available version of MongoDB that millions of developers use to learn, test, and grow their skills. <a href="https://www.mongodb.com/try/download/enterprise">MongoDB Enterprise Server</a> is the commercial version of MongoDB’s core database. It offers additional enterprise-grade features for companies that prefer to self-manage their deployments on-premises or in public, private, or hybrid cloud environments.</p>
<p>With native search and vector search capabilities now available for use with Community Edition and Enterprise Server, MongoDB aims to deliver a simpler, more consistent experience for building great applications wherever they are deployed.</p>
<h2>What are search and vector search?</h2>
<p>Similar to the offerings in MongoDB Atlas, MongoDB Community Edition and MongoDB Enterprise Server now support two distinct yet complementary search capabilities:</p>
<ul>
	<font size="4">
<li><b><a href="https://www.mongodb.com/resources/basics/full-text-search">Full-text search</a></b> is an embedded capability that delivers a seamless, scalable experience for building relevance-based app features.</li>
<li><b><a href="https://www.mongodb.com/resources/basics/vector-search">Vector search</a></b> enables developers to build intelligent applications powered by semantic search and generative AI using native, full-featured vector database capabilities.</li>
	</font>
</ul>
<p>There are no functional limitations on the core search aggregation stages in this public preview. Therefore, <code tabindex="0">$search</code>, <code tabindex="0">$searchMeta</code>, and <code tabindex="0">$vectorSearch</code> are all supported with functional parity to what is available in Atlas, excluding features in a preview state. For more information, check out the <a href="https://www.mongodb.com/docs/atlas/atlas-search/">search</a> and <a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-overview/">vector search</a> documentation pages.</p>
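<p>As a minimal sketch, the three stages can be expressed as ordinary aggregation pipeline documents. The index names (<code tabindex="0">default</code>, <code tabindex="0">vector_index</code>), field paths, and the toy three-dimensional query vector below are illustrative placeholders, not values the release prescribes.</p>

```python
# Sketch: the three search aggregation stages as plain pipeline documents.
# Index names, field paths, and the toy query vector are illustrative.

# Full-text search: relevance-ranked keyword matching on a text field.
text_pipeline = [
    {"$search": {"index": "default",
                 "text": {"query": "diabetes", "path": "diagnosis"}}},
    {"$limit": 5},
]

# Metadata-only query: count matches without returning documents.
count_pipeline = [
    {"$searchMeta": {"index": "default",
                     "count": {"type": "total"}}},
]

# Approximate nearest-neighbor search over stored embeddings.
vector_pipeline = [
    {"$vectorSearch": {"index": "vector_index",
                       "path": "embedding",
                       "queryVector": [0.12, -0.07, 0.33],  # toy 3-d vector
                       "numCandidates": 100,
                       "limit": 5}},
    {"$project": {"diagnosis": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
]
```

<p>Each pipeline runs through the standard <code tabindex="0">aggregate()</code> call, so no separate query API is needed for search.</p>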
<h2>Solving developer challenges with integrated search</h2>
<p>Historically, integrating advanced search features into self-managed applications often required bolting on external search engines or vector databases to MongoDB. This approach created friction at every stage for developers and organizations, leading to:</p>
<ul>
	<font size="4">
		<li><b>Architectural complexity:</b> Managing and synchronizing data across multiple, disparate systems added layers of complexity, demanded additional skills, and complicated development workflows.</li>
		<li><b>Operational overhead:</b> Handling separate provisioning, security, upgrades, and monitoring for each system placed a heavy load on DevOps teams.</li>
		<li><b>Decreased developer productivity:</b> Developers were forced to learn and use different query APIs and languages for the database and the search engine. This resulted in frequent context switching, steeper learning curves, and slower release cycles.</li>
		<li><b>Consistency challenges:</b> Aligning the primary database with separate search or vector indexes risked producing out-of-sync results. Despite promises of transactional guarantees and data consistency, these indexes were only eventually consistent. This led to incomplete results in rapidly changing environments.</li>
	</font>
</ul>	
<p>With search and vector search now integrated into MongoDB Community Edition and MongoDB Enterprise Server, these trade-offs disappear. Developers can now create powerful search capabilities using MongoDB's familiar query framework, removing the synchronization burden and the need to manage multiple single-purpose systems. This release simplifies data architecture, reduces operational overhead, and accelerates application development.</p>
<p>With these capabilities, developers can harness sophisticated out-of-the-box capabilities to build a variety of powerful applications. Potential use cases include:</p>
<style>
    table, th, td {
        border: 1px solid black;
        border-collapse: collapse;
    }
    th, td {
        padding: 5px;
    }
</style>
<table style="width:100%">
    <tr>
        <th></th>
        <th><b>Use Case</b></th>
        <th><b>Description</b></th>
    </tr>
    <tr>
        <td rowspan="3"><b>Keyword/Full-text search</b></td>
        <td>Autocomplete and fuzzy search</td>
        <td>Create real-time suggestions and correct spelling errors as users type, improving the search experience</td>
    </tr>
    <tr>
        <td>Search faceting</td>
        <td>Apply quick filtering options in applications like e-commerce, so users can narrow down search results based on categories, price ranges, and more</td>
    </tr>
    <tr>
        <td>Internal search tools</td>
        <td>Build search tools for internal use or for applications with sensitive data that require on-premises deployment</td>
    </tr>
    <tr>
        <td rowspan="3"><b>Vector search</b></td>
        <td>AI-powered semantic search</td>
        <td>Implement semantic search and recommendation systems to provide more relevant results than traditional keyword matching</td>
    </tr>
    <tr>
        <td>Retrieval-augmented generation (RAG)</td>
        <td>Use search to retrieve factual data from a knowledge base to bring accurate, context-aware data into large language model (LLM) applications</td>
    </tr>
    <tr>
        <td>AI agents</td>
        <td>Create agents that utilize tools to collect context, communicate with external systems, and execute actions</td>
    </tr>
    <tr>
        <td><b>Hybrid search</b></td>
        <td>Hybrid search</td>
        <td>Combine keyword and vector search techniques</td>
    </tr>
    <tr>
        <td><b>Data processing</b></td>
        <td>Text analysis</td>
        <td>Perform text analysis directly in the MongoDB database</td>
    </tr>
</table>
<p>MongoDB offers native integrations with frameworks such as <a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/#std-label-langchain">LangChain</a>, <a href="https://www.mongodb.com/docs/atlas/ai-integrations/langgraph/#std-label-langgraph">LangGraph</a>, and <a href="https://www.mongodb.com/docs/atlas/ai-integrations/llamaindex/#std-label-llamaindex">LlamaIndex</a>. This streamlines workflows, accelerates development, and embeds RAG or agentic features directly into applications. To learn more about other AI frameworks supported by MongoDB, check out this <a href="https://www.mongodb.com/docs/atlas/ai-integrations/">documentation</a>.</p>
<p><b>MongoDB’s partners and champions are already experiencing the benefits from utilizing search and vector search across a wider range of environments:</b></p>
<p>“We’re thrilled that MongoDB search and vector search are now accessible in the already popular MongoDB Community Edition. Now our customers can leverage MongoDB and LangChain in either deployment mode and in their preferred environment to build cutting-edge LLM applications.”—Harrison Chase, CEO, LangChain.</p>
<p>“MongoDB has helped Clarifresh build awesome software, and I’ve always been impressed with its rock-solid foundations. With search and vector search capabilities now available in MongoDB Community Edition, we gain the confidence of accessible source code, the flexibility to deploy anywhere, and the promise of community-driven extensibility. It’s an exciting milestone that reaffirms MongoDB’s commitment to developers.”—Luke Thompson, MongoDB Champion, Clarifresh.</p>
<p>“We’re excited about the next iteration of search experiences in MongoDB Community Edition. Our customers want the highest flexibility to be able to run their search and gen AI-enabled applications, and bringing this functionality to Community unlocks a whole new way to build and test anywhere.”—Jerry Liu, CEO, LlamaIndex.</p>
<p>“Participating in the Private Preview of Full-text and Vector Search for MongoDB Community has been an exciting opportunity. Having $search, $searchMeta, and $vectorSearch directly in Community Edition brings the same powerful capabilities we use in Atlas—without additional systems or integrations. Even in early preview, it’s already streamlining workflows and producing faster, more relevant results.”—Michael Höller, MongoDB Champion, akazia Consulting.</p>
<h2>Accessing the public preview</h2>
<p>The public preview is available for free and is intended for testing, evaluation, and feedback purposes only.</p>
<p><b>Search and Vector Search with MongoDB Community Edition.</b> The new capabilities are compatible with MongoDB version 8.2+ and operate on a separate binary, <code tabindex="0">mongot</code>, which interacts with the standard <code tabindex="0">mongod</code> database binary.</p>
<p>To get started, ensure that:</p>
<ul>
	<font size="4">
<li>A MongoDB Community Server cluster is running using one of the following three methods:</li>
<ul style="list-style-type: lower-alpha; padding-bottom: 0;">
  <li style="margin-left:2em">Download MongoDB Community Server version 8.2 from the <a href="https://www.mongodb.com/try/download/community">MongoDB Downloads page</a>. As of public preview, this feature is available for self-managed deployments on supported Linux distributions and architectures for MongoDB Community Edition version 8.2+.</li>
  <li style="margin-left:6em; padding-bottom: 0;">Download the <code tabindex="0">mongot</code> binary from the <a href="https://www.mongodb.com/try/download/search-in-community">MongoDB Downloads page</a>.</li>
	<li style="margin-left:2em; padding-bottom: 0;">Pull the container image for Community Server 8.2 from a public <a href="https://hub.docker.com/r/mongodb/mongodb-community-server" target="_blank">Docker Hub repository</a>.</li>
	<li style="margin-left:2em; padding-bottom: 0;"><b><i>Coming soon:</i></b> Deploy using the MongoDB Controllers for Kubernetes Operator (Search Support for Community Server is planned for <a href="https://www.mongodb.com/docs/kubernetes/current/release-notes/">version 1.5+</a>).</li>
 </ul>
</li>
	</font>
</ul>
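<p>Once a cluster with <code tabindex="0">mongot</code> attached is running, search indexes are managed with the same commands used in Atlas. Below is a minimal sketch of a vector index definition built as a plain document, with the driver call shown commented out since it requires a live deployment; the namespace, index name, embedding field, and 1536-dimension size are illustrative assumptions.</p>

```python
# Sketch: a vector search index definition as a plain document.
# The namespace, index name, field paths, and dimension count are
# illustrative assumptions, not prescribed values.

vector_index_definition = {
    "fields": [
        {"type": "vector",
         "path": "embedding",        # field holding the stored embeddings
         "numDimensions": 1536,      # must match your embedding model
         "similarity": "cosine"},    # or "euclidean" / "dotProduct"
        {"type": "filter",
         "path": "category"},        # optional field for pre-filtering
    ]
}

# Creating the index needs a running deployment, so it is commented out here:
# from pymongo import MongoClient
# from pymongo.operations import SearchIndexModel
# coll = MongoClient("mongodb://localhost:27017")["mydb"]["docs"]
# coll.create_search_index(
#     SearchIndexModel(definition=vector_index_definition,
#                      name="vector_index",
#                      type="vectorSearch"))
```

<p>After the index builds, <code tabindex="0">$vectorSearch</code> queries against the same index name become available in aggregation pipelines.</p>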
<p><b>Search and Vector Search for use with MongoDB Enterprise Server.</b> The new capabilities are deployed as self-managed search nodes in a customer's Kubernetes environment. These connect seamlessly to any MongoDB Enterprise Server cluster, whether it resides inside or outside Kubernetes itself.</p>
<p>To get started, ensure that:</p>
<ul>
	<font size="4">
<li>A MongoDB Enterprise Server cluster is running.</li>
<ul style="list-style-type: lower-alpha; padding-bottom: 0;">
  <li style="margin-left:2em">version 8.0.10+ (for MongoDB Controllers for Kubernetes operator 1.4).</li>
  <li style="margin-left:2em; padding-bottom: 0;">version 8.2+ (for MongoDB Controllers for Kubernetes operator 1.5+).</li>
		</ul>
		</li>
	<li>A Kubernetes environment.</li>
	<li>The MongoDB Controllers for Kubernetes Operator is installed in the Kubernetes cluster. Find installation instructions <a href="https://www.mongodb.com/docs/kubernetes/current/">here</a>.</li>
	</font>
</ul>
<p>Comprehensive documentation for setup for <a href="https://www.mongodb.com/docs/manual/installation/">MongoDB Community Edition</a> and <a href="https://www.mongodb.com/docs/kubernetes/current/fts-vs-deployment/">MongoDB Enterprise Server</a> is also available.</p>
<h2>What's next?</h2>
<p>During the public preview, MongoDB will deliver additional updates and roadmap features based on customer feedback. After the public preview, these search and vector search capabilities are anticipated to be generally available for use with on-premises deployments. For Community Edition, these capabilities will be available at no additional cost as part of the <a href="https://www.mongodb.com/legal/licensing/server-side-public-license">Server Side Public License (SSPL)</a>.</p>
<p>For MongoDB Enterprise Server, these capabilities will be included in a new paid subscription offering that will launch in the future. Pricing and packaging details for the subscription will be available closer to launch. For developers seeking a fully managed experience in the cloud, <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> offers a production-ready version of these capabilities today.</p>
<p>MongoDB would love to hear feedback! Suggest new features or vote on existing ideas at <a href="http://feedback.mongodb.com" target="_blank">feedback.mongodb.com</a>. This input is critical for shaping the future of this product. Users can contact their MongoDB account team to provide more comprehensive feedback.</p>
<div class="callout">
<p><b>Check out MongoDB’s documentation to learn how to get started with Search and Vector Search in <a href="https://www.mongodb.com/docs/atlas/atlas-search/tutorial/">MongoDB Community Edition</a> and <a href="https://www.mongodb.com/docs/kubernetes/current/fts-vs-deployment/">MongoDB Enterprise Server</a>.</b></p>
</div>	
<hr>
<p><small><sup>1</sup> MongoDB can be deployed as a fully managed multi-cloud service across all major public cloud providers, in private clouds, locally, on-premises and hybrid environments.</small></p>
]]></description>
      <pubDate>Wed, 17 Sep 2025 13:04:46 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities</link>
      <guid>https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities</guid>
    </item><item>
      <title>Potencie las aplicaciones autogestionadas con capacidades de búsqueda y búsqueda vectorial</title>
      <description><![CDATA[<p>MongoDB se complace en anunciar la vista previa pública de las capacidades de búsqueda y búsqueda vectorial para su uso con MongoDB Community Edition y MongoDB Enterprise Server. Estas nuevas capacidades permiten a los desarrolladores crear prototipos, iterar y compilar aplicaciones sofisticadas impulsadas por IA directamente en entornos autogestionados con una sólida funcionalidad de búsqueda.</p>
<p>La versatilidad es una de las razones por las que los desarrolladores aman MongoDB. MongoDB puede ejecutarse en cualquier lugar.<sup>1</sup> Esto incluye configuraciones locales donde muchos desarrolladores inician su viaje con MongoDB, hasta los centros de datos de empresas más grandes cuando es momento de escalar, y el servicio de cloud totalmente gestionado de MongoDB, <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a>. Independientemente de dónde se lleve a cabo el desarrollo, MongoDB se integra sin esfuerzo con el flujo de trabajo de cualquier desarrollador.</p>
<p><a href="https://www.mongodb.com/products/self-managed/community-edition">MongoDB Community Edition</a> es la versión gratuita y con código fuente disponible de MongoDB que millones de desarrolladores utilizan para aprender, probar y mejorar sus habilidades. <a href="https://www.mongodb.com/try/download/enterprise">MongoDB Enterprise Server</a> es la versión comercial de la base de datos principal de MongoDB. Ofrece características adicionales de nivel empresarial para empresas que prefieren autogestionar sus implementaciones on-premises o en entornos de nube pública, privada o híbrida.</p>
<p>Con las capacidades de búsqueda nativa y búsqueda vectorial ahora disponibles para su uso con Community Edition y Enterprise Server, MongoDB tiene como objetivo ofrecer una experiencia más sencilla y coherente para crear excelentes aplicaciones dondequiera que se implementen.</p>
<h2>¿Qué es la búsqueda y la búsqueda vectorial?</h2>
<p>Similar a las ofertas en MongoDB Atlas, MongoDB Community Edition y MongoDB Enterprise Server ahora tienen soporte para dos capacidades de búsqueda distintas pero complementarias:</p>
<ul>
	<font size="4">
		<li><a href="https://www.mongodb.com/resources/basics/full-text-search">La búsqueda de texto completo</a> es una capacidad integrada que ofrece una experiencia sin interrupciones y escalable para desarrollar características de aplicación basadas en la relevancia.</li>
		<li><a href="https://www.mongodb.com/resources/basics/vector-search">La búsqueda vectorial</a> permite a los desarrolladores compilar aplicaciones inteligentes impulsadas por búsqueda semántica e IA generativa utilizando capacidades nativas y completas de base de datos vectorial.</li>
	</font>
</ul>
<p>No existen limitaciones funcionales en las etapas centrales de agregación de búsqueda en esta vista previa pública. Por lo tanto, $search, $searchMeta y $vectorSearch son compatibles con la misma funcionalidad que lo disponible en Atlas, excluyendo las características en estado de vista previa. Para obtener más información, consulte las páginas de documentación de <a href="https://www.mongodb.com/docs/atlas/atlas-search/">búsqueda</a> y <a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-overview/">búsqueda vectorial</a>.</p>
<h2>Resolviendo los desafíos de los desarrolladores con la búsqueda integrada</h2>
<p>Históricamente, integrar características de búsqueda avanzada en aplicaciones autogestionadas a menudo requería añadir motores de búsqueda externos o bases de datos vectoriales a MongoDB. Este enfoque generó fricción en cada etapa para los desarrolladores y las organizaciones, lo que condujo a:</p>
<ul>
	<font size="4">
		<li><b>Complejidad arquitectónica:</b> La gestión y sincronización de datos a través de múltiples sistemas dispares añadió capas de complejidad, exigió habilidades adicionales y complicó los flujos de trabajo de desarrollo.</li>
		<li><b>Sobrecarga operativa:</b> Manejar por separado el aprovisionamiento, la seguridad, las actualizaciones y la supervisión de cada sistema supuso una gran carga para los equipos de DevOps.</li>
		<li><b>Disminución de la productividad de los desarrolladores:</b> Los desarrolladores se veían obligados a aprender y utilizar diferentes API y lenguajes de consulta tanto para la base de datos como para el motor de búsqueda. Esto resultó en cambios frecuentes de contexto, curvas de aprendizaje más pronunciadas y ciclos de lanzamiento más lentos.</li>
		<li><b>Desafíos de coherencia:</b> Alinear la base de datos primaria con índices de búsqueda o vectoriales independientes conllevaba el riesgo de producir resultados desincronizados. A pesar de las promesas de garantías transaccionales y coherencia de datos, estos índices solo eran eventualmente coherentes. Esto condujo a resultados incompletos en entornos de rápida evolución.</li>
	</font>
</ul>	
<p>Con la búsqueda y la búsqueda vectorial ahora integradas en MongoDB Community Edition y MongoDB Enterprise Server, estas compensaciones desaparecen. Los desarrolladores ahora pueden crear potentes capacidades de búsqueda utilizando el conocido marco de consultas de MongoDB, eliminando la carga de sincronización y la necesidad de gestionar múltiples sistemas de propósito único. Esta versión simplifica la arquitectura de datos, reduce la sobrecarga operativa y acelera el desarrollo de aplicaciones.</p>
<p>Con estas capacidades, los desarrolladores pueden aprovechar sofisticadas capacidades listas para usar para compilar una variedad de potentes aplicaciones. Los posibles casos de uso incluyen:</p>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-30 at 10.16.58 AM-jmzol0lxhc.png" alt=" " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>MongoDB ofrece integraciones nativas con frameworks como <a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/#std-label-langchain">LangChain</a>, <a href="https://www.mongodb.com/docs/atlas/ai-integrations/langgraph/#std-label-langgraph">LangGraph</a> y <a href="https://www.mongodb.com/docs/atlas/ai-integrations/llamaindex/#std-label-llamaindex">LlamaIndex</a>. Esto agiliza los flujos de trabajo, acelera el desarrollo e integra características RAG o de agentes directamente en las aplicaciones. Para aprender más sobre otros marcos de IA compatibles con MongoDB, consulte esta <a href="https://www.mongodb.com/docs/atlas/ai-integrations/">documentación</a>.</p>
<p><b>Los socios y defensores de MongoDB ya están experimentando los beneficios de utilizar la búsqueda y la búsqueda vectorial en un rango más amplio de entornos:</b></p>
<p>“Estamos encantados de que ahora se pueda acceder a la búsqueda de MongoDB y a la búsqueda vectorial en la ya popular MongoDB Community Edition. Ahora nuestros clientes pueden aprovechar MongoDB y LangChain en cualquier modo de implementación y en su entorno preferido para compilar aplicaciones de LLM de vanguardia”. —Harrison Chase, CEO de LangChain.</p>
<p>“MongoDB ha ayudado a Clarifresh a compilar un software increíble, y siempre me ha impresionado su base sólida como una roca. Con las capacidades de búsqueda y búsqueda vectorial ahora disponibles en MongoDB Community Edition, obtenemos la confianza del código fuente accesible, la flexibilidad para implementar en cualquier lugar y la promesa de una extensibilidad impulsada por la comunidad. Es un hito emocionante que reafirma el compromiso de MongoDB con los desarrolladores”. —Luke Thompson, MongoDB Champion, Clarifresh.</p>
<p>“Estamos entusiasmados con la siguiente iteración de experiencias de búsqueda en MongoDB Community Edition. Nuestros clientes desean la máxima flexibilidad para poder ejecutar sus aplicaciones de búsqueda y habilitadas para IA generativa, y llevar esta funcionalidad a Community desbloquea una forma completamente nueva de compilar y probar en cualquier lugar”. —Jerry Liu, CEO de LlamaIndex.</p>
<p>“Participar en la vista previa privada de búsqueda de texto completo y búsqueda vectorial para MongoDB Community ha sido una oportunidad emocionante. Tener $search, $searchMeta y $vectorSearch directamente en Community Edition ofrece las mismas potentes capacidades que utilizamos en Atlas, sin sistemas ni integraciones adicionales. Incluso en esta vista previa inicial, ya está optimizando los flujos de trabajo y produciendo resultados más rápidos y relevantes”. —Michael Höller, MongoDB Champion, akazia Consulting.</p>
<h2>Accediendo a la vista previa pública</h2>
<p>La vista previa pública está disponible de forma gratuita y está destinada únicamente a fines de prueba, evaluación y comentarios.</p>
<p><b>Búsqueda y búsqueda vectorial con MongoDB Community Edition.</b> Las nuevas capacidades son compatibles con MongoDB versión 8.2+ y operan en un binario separado, mongot, que interactúa con el binario estándar de la base de datos, mongod.</p>
<p>Para empezar, asegúrese de que:</p>
<ul>
	<font size="4">
<li>Se está ejecutando un clúster de MongoDB Community Server utilizando uno de los tres métodos siguientes:</li>
<ul style="list-style-type: lower-alpha; padding-bottom: 0;">
  <li style="margin-left:2em">Descargue MongoDB Community Server versión 8.2 desde la <a href="https://www.mongodb.com/try/download/community">página de descargas de MongoDB</a>. A partir de la vista previa pública, esta característica está disponible para implementaciones autogestionadas en distribuciones y arquitecturas de Linux compatibles con MongoDB Community Edition versión 8.2+. </li>
  <li style="margin-left:6em; padding-bottom: 0;">Descargue el binario de mongot desde la <a href="https://www.mongodb.com/try/download/search-in-community">página de descargas de MongoDB</a>.</li>
	<li style="margin-left:2em; padding-bottom: 0;">Extraiga la imagen del contenedor para Community Server 8.2 desde un repositorio público de <a href="https://hub.docker.com/r/mongodb/mongodb-community-server" target="_blank">Docker Hub</a>.</li>
	<li style="margin-left:2em; padding-bottom: 0;"><b><i>Próximamente:</i></b> implementar usando los Controladores de MongoDB para el Operador de Kubernetes (el soporte de búsqueda para Community Server está previsto para la <a href="https://www.mongodb.com/docs/kubernetes/current/release-notes/">versión 1.5 y posteriores</a>).</li>
 </ul>
</li>
	</font>
</ul>
<p><b>Búsqueda y búsqueda vectorial para usar con MongoDB Enterprise Server.</b> Las nuevas capacidades se implementan como nodos de búsqueda autogestionados en el entorno de Kubernetes del cliente. Estos se conectan sin problemas a cualquier clúster de MongoDB Enterprise Server, ya resida dentro o fuera del propio Kubernetes.</p>
<p>Para empezar, asegúrese de que:</p>
<ul>
	<font size="4">
<li>Se está ejecutando un clúster de MongoDB Enterprise Server.</li>
<ul style="list-style-type: lower-alpha; padding-bottom: 0;">
  <li style="margin-left:2em">versión 8.0.10+ (para el operador MongoDB Controllers for Kubernetes 1.4).</li>
  <li style="margin-left:2em; padding-bottom: 0;">versión 8.2+ (para el operador MongoDB Controllers for Kubernetes 1.5+).</li>
		</ul>
		</li>
	<li>Un entorno de Kubernetes.</li>
	<li>El MongoDB Controllers for Kubernetes Operator está instalado en el clúster de Kubernetes. Encuentre las instrucciones de instalación <a href="https://www.mongodb.com/docs/kubernetes/current/">aquí</a>.</li>
	</font>
</ul>
<p>También está disponible la documentación completa para configurar <a href="https://www.mongodb.com/docs/manual/installation/">MongoDB Community Edition</a> y <a href="https://www.mongodb.com/docs/kubernetes/current/fts-vs-deployment/">MongoDB Enterprise Server</a>.</p>
<h2>¿Qué sigue?</h2>
<p>Durante la vista previa pública, MongoDB ofrecerá actualizaciones adicionales y características del plan de desarrollo basadas en los comentarios de los clientes. Después de la vista previa pública, se anticipa que estas capacidades de búsqueda y búsqueda vectorial estarán generalmente disponibles para su uso con implementaciones on-premises. Para Community Edition, estas capacidades estarán disponibles sin costo adicional como parte de la <a href="https://www.mongodb.com/legal/licensing/server-side-public-license">Server Side Public License (SSPL)</a>.</p>
<p>Para MongoDB Enterprise Server, estas capacidades se incluirán en una nueva oferta de suscripción de pago que se lanzará en el futuro. Los detalles de precios y empaquetado de la suscripción estarán disponibles más cerca del lanzamiento. Para los desarrolladores que buscan una experiencia totalmente gestionada en la nube, <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> ofrece una versión lista para producción de estas capacidades hoy.</p>
<p>¡ A MongoDB le encantaría recibir retroalimentación; also: comentarios! Sugiera nuevas características o vote sobre ideas existentes en <a href="http://feedback.mongodb.com" target="_blank">feedback.mongodb.com</a>. La entrada es crucial para moldear el futuro de este producto. Los usuarios pueden ponerse en contacto con su equipo de cuentas de MongoDB para proporcionar comentarios más completos.</p>
<div class="callout">
<p><b>Check out the MongoDB documentation to learn how to get started with Search and Vector Search in <a href="https://www.mongodb.com/docs/atlas/atlas-search/tutorial/">MongoDB Community Edition</a> and <a href="https://www.mongodb.com/docs/kubernetes/current/fts-vs-deployment/">MongoDB Enterprise Server</a>.</b></p>
</div>	
<hr>
<p><small><sup>1</sup> MongoDB can be deployed as a fully managed multi-cloud service on all major public cloud providers, in private clouds, on-premises, and in hybrid environments.</small></p>
]]></description>
      <pubDate>Wed, 17 Sep 2025 13:04:46 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-es</link>
      <guid>https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-es</guid>
    </item><item>
      <title>Supercharge Self-Managed Apps with Search and Vector Search Capabilities</title>
      <description><![CDATA[<p>MongoDB is pleased to announce that search and vector search capabilities are now in public preview for MongoDB Community Edition and MongoDB Enterprise Server. These new capabilities enable developers to prototype, iterate, and build sophisticated AI-powered applications directly in self-managed environments with powerful search functionality.</p>
<p>Versatility is a big part of why developers choose MongoDB: it runs anywhere.<sup>1</sup> That spans the local deployments where many developers begin their MongoDB journey, the hyperscale data centers enterprises rely on at scale, and MongoDB's fully managed cloud service, <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a>. Wherever development happens, MongoDB integrates easily into any developer's workflow.</p>
<p><a href="https://www.mongodb.com/products/self-managed/community-edition">MongoDB Community Edition</a> is the free, source-available version of MongoDB that millions of developers worldwide use to learn, test, and sharpen their skills. <a href="https://www.mongodb.com/try/download/enterprise">MongoDB Enterprise Server</a> is the commercial version of MongoDB's core database. It provides additional enterprise-grade enhancements for customers who prefer to self-manage their deployments on-premises or in public, private, and hybrid cloud environments.</p>
<p>With native search and vector search capabilities now supporting Community Edition and Enterprise Server, MongoDB is committed to giving developers a simpler, more unified experience for building great applications in any deployment environment.</p>
<h2>What are Search and Vector Search?</h2>
<p>Similar to the offerings in MongoDB Atlas, MongoDB Community Edition and MongoDB Enterprise Server now support two distinct but complementary search capabilities:</p>
<ul>
	<font size="4">
		<li><a href="https://www.mongodb.com/resources/basics/full-text-search">Full-text search</a> is a built-in capability that provides a seamless, scalable experience for building relevance-based application features.</li>
		<li><a href="https://www.mongodb.com/resources/basics/vector-search">Vector search</a> enables developers to build intelligent applications powered by semantic search and generative AI, using native, fully featured vector database capabilities.</li>
	</font>
</ul>
<p>In this public preview, there are no feature restrictions on the core search aggregation stages. $search, $searchMeta, and $vectorSearch are all supported, with functionality on par with what Atlas offers, excluding features that are still in preview there. For more information, see the <a href="https://www.mongodb.com/docs/atlas/atlas-search/">Search</a> and <a href="https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-overview/">Vector Search</a> documentation pages.</p>
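<p>As a concrete illustration, these stages slot into an ordinary aggregation pipeline. The sketch below builds such pipelines as plain Python dictionaries, as one might pass to PyMongo's <code>aggregate()</code>; the index names, field paths, and query vector are illustrative assumptions, not part of the announcement.</p>

```python
# Sketch: building aggregation pipelines that use the newly supported
# $vectorSearch and $searchMeta stages. Index and field names ("vector_index",
# "embedding", "title") are hypothetical placeholders.

def vector_search_pipeline(query_vector, limit=5):
    """Approximate nearest-neighbor search over a vector field."""
    return [
        {"$vectorSearch": {
            "index": "vector_index",       # assumed vector index name
            "path": "embedding",           # assumed field holding embeddings
            "queryVector": query_vector,
            "numCandidates": limit * 20,   # oversample candidates for recall
            "limit": limit,
        }},
        # Surface the similarity score alongside each result.
        {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

def count_matches_pipeline(term):
    """Use $searchMeta to count full-text matches without returning documents."""
    return [
        {"$searchMeta": {
            "index": "default",            # assumed search index name
            "text": {"query": term, "path": "title"},
            "count": {"type": "total"},
        }},
    ]
```

<p>Against a running deployment these would be executed with a call such as <code>db.movies.aggregate(vector_search_pipeline(qv))</code>; the collection name is likewise an assumption.</p>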
<h2>Solving developer challenges with integrated search</h2>
<p>Historically, integrating advanced search capabilities into self-managed applications has typically required connecting an external search engine or vector database to MongoDB. This pattern put obstacles in front of developers and enterprises, leading to:</p>
<ul>
	<font size="4">
		<li>Architectural complexity: managing and synchronizing data across multiple separate systems adds layers of complexity, demands additional skills, and complicates development workflows.</li>
		<li>Operational burden: handling configuration, security, upgrades, and monitoring separately for each system places a heavy workload on DevOps teams.</li>
		<li>Reduced developer productivity: developers must learn and use different query APIs and languages for the database and the search engine, leading to frequent context switching, steeper learning curves, and slower release cycles.</li>
		<li>Consistency challenges: keeping the primary database aligned with separate search or vector indexes risks out-of-sync results. Despite promises of transactional guarantees and data consistency, these indexes are in practice only eventually consistent, which leads to incomplete retrieval results in fast-changing environments.</li>
	</font>
</ul>	
<p>With search and vector search now integrated into MongoDB Community Edition and MongoDB Enterprise Server, these trade-offs disappear. Developers can create powerful search functionality with MongoDB's familiar query framework, free from the synchronization burden and the need to manage multiple single-purpose systems. This release simplifies data architecture, reduces operational overhead, and accelerates application development.</p>
<p>With these capabilities, developers can draw on sophisticated out-of-the-box functionality to build a wide range of powerful applications. Potential use cases include:</p>
<figure>
<div class="fl-center">
<img src="https://webassets.mongodb.com/_com_assets/cms/Screenshot 2025-09-30 at 10.16.58 AM-jmzol0lxhc.png" alt=" " title=" " style="width: 800px"/>
</div>
<figcaption class="fl-center"> </figcaption>
</figure>
<p>MongoDB offers native integrations with frameworks such as <a href="https://www.mongodb.com/docs/atlas/ai-integrations/langchain/#std-label-langchain">LangChain</a>, <a href="https://www.mongodb.com/docs/atlas/ai-integrations/langgraph/#std-label-langgraph">LangGraph</a>, and <a href="https://www.mongodb.com/docs/atlas/ai-integrations/llamaindex/#std-label-llamaindex">LlamaIndex</a>. These simplify workflows, speed up development, and embed RAG or agentic capabilities directly into applications. To learn about other AI frameworks MongoDB supports, see this <a href="https://www.mongodb.com/docs/atlas/ai-integrations/">documentation</a>.</p>
<p><b>MongoDB's partners and champions are already using and benefiting from Search and Vector Search across a wider range of environments</b></p>
<p>"We're thrilled that MongoDB's search and vector search capabilities are now available in the hugely popular MongoDB Community Edition. Our customers can now combine MongoDB and LangChain in any deployment model and in their preferred environment to build cutting-edge LLM applications," said Harrison Chase, CEO of LangChain.</p>
<p>"MongoDB has helped Clarifresh build great software, and its rock-solid foundation has always impressed me. With search and vector search now available in MongoDB Community Edition, we get the confidence of source-available code, the flexibility to deploy anywhere, and the promise of community-driven extensibility. This is an exciting milestone that reaffirms MongoDB's commitment to developers," said Luke Thompson, MongoDB Champion at Clarifresh.</p>
<p>"We're very excited about the next generation of search experiences in MongoDB Community Edition. Our customers want maximum flexibility in where they run their search and generative AI applications, and bringing this capability to the Community edition unlocks an entirely new way to build and test in any environment," said Jerry Liu, CEO of LlamaIndex.</p>
<p>"Participating in the private preview of full-text search and vector search in MongoDB Community Edition has been an exciting experience. Bringing $search, $searchMeta, and $vectorSearch directly into Community Edition gives us the same powerful capabilities we use in Atlas, without additional systems or integrations. Even at this early preview stage, it is already simplifying workflows and delivering faster, more relevant results," said Michael Höller, MongoDB Champion at akazia Consulting.</p>
<h2>Accessing the public preview</h2>
<p>The public preview is available free of charge, for testing, evaluation, and feedback purposes only.</p>
<p><b>Search and Vector Search for use with MongoDB Community Edition.</b> These new capabilities are compatible with MongoDB 8.2 and later and run on a separate binary, mongot, which communicates with the standard mongod database binary.</p>
<p>To get started, make sure that:</p>
<ul>
	<font size="4">
<li>A MongoDB Community Server cluster is running via one of the following methods:</li>
<ul style="list-style-type: lower-alpha; padding-bottom: 0;">
  <li style="margin-left:2em">Download MongoDB Community Server version 8.2 from the <a href="https://www.mongodb.com/try/download/community">MongoDB downloads page</a>. As of the public preview, this capability is available for MongoDB Community Edition 8.2 and later for self-managed deployments on supported Linux distributions and architectures.</li>
  <li style="margin-left:6em; padding-bottom: 0;">Download the mongot binary from the <a href="https://www.mongodb.com/try/download/search-in-community">MongoDB downloads page</a>.</li>
	<li style="margin-left:2em; padding-bottom: 0;">Pull the Community Server 8.2 container image from the public <a href="https://hub.docker.com/r/mongodb/mongodb-community-server" target="_blank">Docker Hub</a> repository.</li>
	<li style="margin-left:2em; padding-bottom: 0;"><b><i>Coming soon:</i></b> deploy using the MongoDB Controllers for Kubernetes Operator (search support for Community Server is planned for <a href="https://www.mongodb.com/docs/kubernetes/current/release-notes/">version 1.5 and later</a>).</li>
 </ul>
</li>
	</font>
</ul>
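<p>Once mongod and mongot are running and connected, a search index can be created from application code. The sketch below assumes PyMongo 4.6+ and uses hypothetical database, collection, and index names; it is an illustration of the general shape, not an official setup script.</p>

```python
# Sketch: defining a full-text search index once the deployment is running.
# "store", "products", and the index body below are hypothetical examples.

# A dynamic mapping indexes all fields without listing them explicitly.
index_model = {
    "name": "default",
    "definition": {"mappings": {"dynamic": True}},
}

def create_product_search_index(uri="mongodb://localhost:27017"):
    # Imported inside the function so the index definition above can be
    # inspected without a driver installed or a server running.
    from pymongo import MongoClient

    coll = MongoClient(uri)["store"]["products"]
    # create_search_index accepts a dict with "name" and "definition" keys
    # (or a pymongo.operations.SearchIndexModel) and returns the index name.
    return coll.create_search_index(index_model)
```

<p>After the index is built, the collection can be queried with <code>$search</code> through the normal <code>aggregate()</code> call.</p>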
<p><b>Search and Vector Search for use with MongoDB Enterprise Server.</b> These new capabilities are deployed as self-managed Search Nodes in a customer's Kubernetes environment. They connect seamlessly to any MongoDB Enterprise Server cluster, whether it resides inside or outside Kubernetes itself.</p>
<p>To get started, make sure that:</p>
<ul>
	<font size="4">
<li>A MongoDB Enterprise Server cluster is running.</li>
<ul style="list-style-type: lower-alpha; padding-bottom: 0;">
  <li style="margin-left:2em">Version 8.0.10+ (for MongoDB Controllers for Kubernetes Operator 1.4).</li>
  <li style="margin-left:2em; padding-bottom: 0;">Version 8.2+ (for MongoDB Controllers for Kubernetes Operator 1.5+).</li>
		</ul>
		</li>
	<li>A Kubernetes environment is available.</li>
	<li>The MongoDB Controllers for Kubernetes Operator is installed in the Kubernetes cluster. Find the installation instructions <a href="https://www.mongodb.com/docs/kubernetes/current/">here</a>.</li>
	</font>
</ul>
<p>Full installation documentation for <a href="https://www.mongodb.com/docs/manual/installation/">MongoDB Community Edition</a> and <a href="https://www.mongodb.com/docs/kubernetes/current/fts-vs-deployment/">MongoDB Enterprise Server</a> is also available.</p>
<h2>What's next?</h2>
<p>During the public preview, MongoDB will deliver additional features and roadmap capabilities based on customer feedback. After the public preview, these search and vector search capabilities are expected to become generally available for use with on-premises deployments. For Community Edition, they will be available at no additional cost as part of the Server Side Public License (<a href="https://www.mongodb.com/legal/licensing/server-side-public-license">SSPL</a>).</p>
<p>For MongoDB Enterprise Server, these capabilities will be included in a new paid subscription offering to be launched in the future. Pricing and packaging details for the subscription will be available closer to launch. For developers seeking a fully managed experience in the cloud, <a href="https://www.mongodb.com/products/platform/atlas-database">MongoDB Atlas</a> offers a production-ready version of these capabilities today.</p>
<p>MongoDB would love to hear your feedback! Suggest new features or vote on existing ideas at <a href="http://feedback.mongodb.com" target="_blank">feedback.mongodb.com</a>. Your input is crucial to shaping the future of this product. Users can contact their MongoDB account team to provide more in-depth feedback.</p>
<div class="callout">
<p>Check out the MongoDB documentation to learn how to get started with Search and Vector Search in <a href="https://www.mongodb.com/docs/atlas/atlas-search/tutorial/">MongoDB Community Edition</a> and <a href="https://www.mongodb.com/docs/kubernetes/current/fts-vs-deployment/">MongoDB Enterprise Server</a>.</p>
</div>	
<hr>
<p><small><sup>1</sup> MongoDB can be deployed as a fully managed multi-cloud service on all major public cloud providers, in private clouds, on-premises, and in hybrid environments.</small></p>
]]></description>
      <pubDate>Wed, 17 Sep 2025 13:04:46 GMT</pubDate>
      <link>https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-cn</link>
      <guid>https://www.mongodb.com/company/blog/product-release-announcements/supercharge-self-managed-apps-search-vector-search-capabilities-cn</guid>
    </item></channel></rss>