Product Updates
The most recent MongoDB product releases and updates
MongoDB Atlas CLI: Full API Coverage and Faster Updates
We’re thrilled to announce that starting today, you can access every feature in the MongoDB Atlas Administration API from the MongoDB Atlas CLI. This significant enhancement means that you’ll also get new features quicker than ever, within just days of their launch. No more hoping for feature support or switching between interfaces. If it’s in the MongoDB Atlas Admin API, it’s in your Atlas CLI.

Full parity with the API

Until now, you had to wait for the Atlas CLI team to manually implement support for new MongoDB Atlas Administration API endpoints. Those days are over. Every MongoDB Atlas capability, whether it launched today or has been around for years, will now automatically become available from your command line. With the new atlas api subcommands, you get:

- Full feature parity with the MongoDB Atlas Administration API.
- Quicker access to future MongoDB Atlas Administration API features.
- A unified, predictable command structure that makes automation easy.
- Simplified handling of long-running operations with the --watch flag, eliminating the need for complex polling logic.
- The ability to pin a desired API version, ensuring your scripts remain reliable even if you update the CLI (see the example at the end of this section).

The reason we built this

We prioritized this feature to ensure our users could use all capabilities exposed by the MongoDB Atlas Administration API through the Atlas CLI without delays. This isn’t about adding new capabilities to MongoDB Atlas but about making existing API functionality accessible through the Atlas CLI, eliminating the previous gap in functionality. Our goal was simple: If a feature exists in the MongoDB Atlas Administration API, you should be able to access it through the CLI immediately, not weeks or months later. Now you can.

The CLI simplifies API interactions

With complete coverage of the MongoDB Atlas Administration API now available in the MongoDB Atlas CLI, users no longer need to implement workarounds in order to make use of these endpoints. For example:

- Authentication: The CLI makes API interactions simpler by automatically handling authentication. This means you don’t have to manage tokens or credentials on your own when making requests to endpoints that need authentication.
- Monitoring long-running operations: The CLI offers a powerful --watch flag for all API subcommands. This flag automatically monitors long-running operations until they are completed, eliminating the need to implement polling loops manually. Without it, you would have to repeatedly check the operation status by directly calling the API.

Let’s take a look at an example of how the --watch flag simplifies waiting for long-running operations:

```
atlas api clusters createCluster --file clusterspec.json --watch
```

This command creates a cluster and waits until it’s fully provisioned before returning, eliminating the need for complex polling logic in your scripts.

Practical applications

The atlas api subcommands enable powerful workflows that were previously unavailable in the CLI:

- Cluster outage simulation: Simulate regional outage scenarios directly through the CLI. You can now script tests that simulate an entire cloud provider’s region being down, helping ensure your applications remain resilient during actual outages.
- Invoice investigation: Generate custom reports and retrieve billing information programmatically. Need to pull invoice data for your finance team? That’s now a simple CLI command away.
- Access tracking: Monitor and manage user access patterns across your MongoDB Atlas resources, enhancing your security posture without leaving the command line.

These are just a few of the features now available through the new atlas api subcommands. Visit our documentation to explore the full range of available commands.

Robust and fully documented API subcommands

All atlas api subcommands are auto-generated from our OpenAPI specification, ensuring they stay up to date with the latest Atlas Administration API features. Additionally, these subcommands are versioned, which ensures that your scripts won’t break when the API updates—a critical feature for reliable automation. For detailed information on syntax and usage, please refer to our comprehensive documentation.

Status: In public preview and ready for your feedback

The introduction of atlas api subcommands represents a significant advancement in making MongoDB Atlas more accessible and automatable. By bringing the full power of the MongoDB Atlas Administration API to the command line, we’re enabling anyone who automates their MongoDB Atlas cloud to work more efficiently. Whether you’re managing infrastructure, implementing testing protocols, or generating reports, these new capabilities can transform your MongoDB Atlas experience—all without leaving the command line. As this feature is currently in public preview, we’re actively seeking your input. Here is how to get started:

- Get the latest CLI: Update your Atlas CLI today to access these new subcommands.
- Try an example: Try from this list of example Atlas CLI commands.
- Provide feedback: Share your thoughts on how we can improve through our feedback forum.

Your feedback helps us understand how you’re using these capabilities and what improvements would make them even more valuable to your workflows. Learn more about the MongoDB Atlas CLI through our documentation.
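And here is what version pinning from the list above looks like in practice. This is a minimal sketch: the subcommand names follow the documented atlas api convention of OpenAPI tag plus operation ID, but treat the exact names and the version string as assumptions to verify against your installed CLI:

```
# List your organization's projects as JSON, pinning the API version so the
# script keeps working across CLI upgrades (names assumed from the docs).
atlas api projects listProjects --version 2023-01-01 --output json
```

Because every subcommand is generated from the same OpenAPI specification, the same pattern applies across the entire Administration API.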
Spring Data MongoDB: Now with Vector Search and Queryable Encryption
MongoDB is pleased to announce new enhancements to the Spring Data MongoDB library with the release of version 4.5.0, increasing capabilities related to vector search, vector search index creation, and queryable encryption. Spring Data MongoDB makes it easier for developers to integrate MongoDB into their Java applications, taking advantage of a potent combination of powerful MongoDB features and familiar Spring conventions.

Vector search

Vector embeddings convert disparate types of data into numbers that capture meaning and relationships. Many types of data—words, sentences, images, even videos—can be represented by a vector embedding for use in AI applications. In MongoDB, you can easily store and index vector embeddings alongside your other document data—no need to manage a separate vector database or maintain an ETL pipeline.

In MongoDB, an aggregation pipeline consists of one or more stages that process documents, performing operations such as $count and $group. $vectorSearch is an aggregation pipeline stage for handling vector retrieval. It was released in MongoDB 6.0, and improved upon in MongoDB 7.0 and 8.0. Using the $vectorSearch stage to pre-filter your data and perform a semantic search against indexed fields, you can easily process vector embeddings in your aggregation pipeline.

Vector search indexes

Like other retrieval techniques, indexes are a key part of implementing vector search, allowing you to narrow the scope of your semantic search and exclude irrelevant vector embeddings. This is useful in an environment where it isn’t necessary to consider every vector embedding for comparison. Let’s see how easy it is to create a vector search index with Spring Data MongoDB 4.5.0!

```
VectorIndex index = new VectorIndex("vector_index")
    .addVector("plotEmbedding", vector -> vector.dimensions(1536).similarity(COSINE))
    .addFilter("year");

mongoTemplate.searchIndexOps(Movie.class)
    .createIndex(index);
```

As you can see, the VectorIndex class offers intuitive methods such as addVector and addFilter that allow you to define exactly, with native Spring Data APIs, the vector index you want to initialize. To actually execute a search operation that leverages the index, just issue an aggregation:

```
VectorSearchOperation search = VectorSearchOperation.search("vector_index")
    .searchType(VectorSearchOperation.SearchType.ENN)
    .path("plotEmbedding")
    .vector( ... )
    .limit(10)
    .numCandidates(150)
    .withSearchScore("score");

AggregationResults<MovieWithSearchScore> results = mongoTemplate
    .aggregate(newAggregation(Movie.class, search), MovieWithSearchScore.class);
```

Leverage the power of MongoDB to run sophisticated vector searches, directly from Spring.

Queryable Encryption

Support for vector search isn’t the only enhancement found in 4.5.0. Now, you can pass encryptedFields right into your CollectionOptions class, giving Spring the context to understand which fields are encrypted. This context allows Spring to leverage the power of MongoDB Queryable Encryption (QE) to keep sensitive data protected in transit, at rest, or in use. QE allows you to encrypt sensitive application data, store it securely in an encrypted state in the MongoDB database, and perform equality and range queries directly on the encrypted data.
Let’s look at how easy it is to create an encrypted collection with Spring Data MongoDB:

```
CollectionOptions collectionOptions = CollectionOptions.encryptedCollection(options -> options
    .queryable(encrypted(string("ssn")).algorithm("Indexed"), equality().contention(0))
    .queryable(encrypted(int32("age")).algorithm("Range"), range().contention(8).min(0).max(150))
    .queryable(encrypted(int64("address.sign")).algorithm("Range"), range().contention(2).min(-10L).max(10L))
);

mongoTemplate.createCollection(Patient.class, collectionOptions);
```

By declaring upfront the options allowed for different fields of the new collection, Spring and MongoDB work together to keep your data safe!

We’re excited for you to start incorporating these exciting new features into applications built with Spring Data MongoDB. Here are some resources to help you get started:

- Explore the Spring Data MongoDB documentation
- Check out the GitHub repository
- Read the release notes for Spring Data MongoDB 4.5.0
Now in Public Preview: The MongoDB for IntelliJ Plugin
The MongoDB for IntelliJ plugin empowers Java developers to build and ship applications quickly and confidently by enhancing the Database Explorer experience in IntelliJ IDEA. After first announcing the plugin in private preview at .local London in the fall of 2024, we’ve partnered with our friends at JetBrains to release a new and improved experience in public preview. Using the MongoDB for IntelliJ plugin, developers can analyze their application code alongside their database, accelerating query development, validating accuracy, and highlighting anti-patterns with proactive performance insights.

What’s in the MongoDB for IntelliJ plugin?

As part of the public preview, we’re committed to ensuring that the MongoDB for IntelliJ plugin not only meets developers’ technical requirements but also paves the way for a seamless developer experience with MongoDB Atlas. The MongoDB for IntelliJ plugin public preview offers developers the following capabilities:

- Field-level autocompletion for Java queries: Auto-suggests field names from MongoDB collections as developers write queries.
- Schema and type validation: Surfaces inline warnings when query values don’t match the expected field type based on the collection schema, and validates that a field exists in your collection’s schema.
- Java query execution in the IntelliJ console: Allows developers to test Java queries with a single click, without needing to switch tools or translate syntax.
- Proactive anti-pattern detection: Identifies potential performance issues (such as a query missing an index) and provides inline warnings and documentation links.
- Spring and Java driver support: Supports query syntax across popular Java patterns, the criteria API, and aggregation patterns.
- Code smarter with your AI: Plugin-generated linting insights help your in-IDE AI assistant detect and resolve code issues.

Figure 1. Code smarter with your AI.

Benefits of using the official MongoDB for IntelliJ plugin

Java development often involves working with complex, evolving data models, making MongoDB’s flexible document model an ideal choice for Java applications’ data layer. The plugin provides developers with a unified experience for building with MongoDB directly inside IntelliJ, enabling faster and more focused development.

By eliminating the need to switch between IntelliJ and external tools, the plugin streamlines query development and testing workflows. Features like field-level autocomplete and inline schema validation reduce errors before runtime, allowing developers to build and validate MongoDB queries with confidence and speed. Whether writing queries with the MongoDB Java driver, Spring Data, or aggregation pipelines, the plugin provides context-aware suggestions and real-time feedback that accelerate development.

Additionally, the plugin proactively flags common MongoDB query anti-patterns—such as missing indexes or inefficient operators—right in your code, helping teams catch performance issues before they hit production. With the ability to test queries directly in the IntelliJ MongoDB console and view execution metadata like query plans and durations, the plugin brings performance awareness and code correctness to where developers actually write the code for their applications.

How to get started with the MongoDB for IntelliJ plugin

You can get started using the MongoDB for IntelliJ plugin through the JetBrains marketplace. Questions? Feedback? Please post on our community forums or through UserVoice.
We value your input as we continue to develop a compelling offering for the Java community.
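To picture these capabilities on real code, here is the kind of MongoDB Java driver query the plugin assists with. The collection and field names below are hypothetical; on code like this, the plugin can autocomplete field names from the live collection schema, warn when a value’s type doesn’t match, and flag a filter that no index covers:

```
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import org.bson.Document;

public class MovieQueries {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> movies =
                    client.getDatabase("sample_mflix").getCollection("movies");

            // The plugin can autocomplete "release_year" from the collection schema,
            // warn if 2024 doesn't match the field's type, and flag a missing index.
            Document first = movies.find(Filters.eq("release_year", 2024)).first();
            System.out.println(first);
        }
    }
}
```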
Mongoose Now Natively Supports QE and CSFLE
Mongoose 8.15.0 has been released, adding support for the industry-leading encryption solutions available from MongoDB. With this update, it’s simpler than ever to create documents leveraging MongoDB Queryable Encryption (QE) and Client-Side Field Level Encryption (CSFLE), keeping your data secure when it is in use. Read on to learn more about approaches to encrypting your data when building with MongoDB and Mongoose.

What is Mongoose?

Mongoose is a library that enables elegant object modeling for Node.js applications working with MongoDB. Similar to an Object-Relational Mapper (ORM), the Mongoose Object Document Mapper (ODM) simplifies programmatic data interaction through schemas and models. It allows developers to define data structures with validation and provides a rich API for CRUD operations, abstracting away many of the complexities of the underlying MongoDB driver. This integration enhances productivity by enabling developers to work with JavaScript objects instead of raw database queries, making it easier to manage data relationships and enforce data integrity.

What are QE and CSFLE?

Securing sensitive data is paramount. It must be protected at every stage—whether in transit, at rest, or in use. However, implementing in-use encryption can be complex. MongoDB offers two approaches to make it easier: Queryable Encryption (QE) and Client-Side Field Level Encryption (CSFLE).

QE allows customers to encrypt sensitive application data, store it securely in an encrypted state in the MongoDB database, and perform equality and range queries directly on the encrypted data. An industry-first innovation, QE eliminates the need for costly custom encryption solutions, complex third-party tools, or specialized cryptography knowledge. It employs a unique structured encryption schema, developed by the MongoDB Cryptography Research Group, that simplifies the encryption of sensitive data while enabling equality and range queries to be performed directly on data without having to decrypt it. The data remains encrypted at all stages, with decryption occurring only on the client side. This architecture supports strict access controls, where MongoDB and even an organization’s own database administrators (DBAs) don’t have access to sensitive data. This design enhances security by keeping the server unaware of the data it processes, further mitigating the risk of exposure and minimizing the potential for unauthorized access.

Adding QE/CSFLE auto-encryption support for Mongoose

The primary goal of the Mongoose integration with QE and CSFLE is to provide idiomatic support for automatic encryption, simplifying the process of creating encrypted models. With native support for QE and CSFLE, Mongoose allows developers to define encryption options directly within their schemas without the need for separate configurations. This first-class API enables developers to work within Mongoose without dropping down to the driver level, minimizing the need for significant code changes when adopting QE and CSFLE.

Mongoose streamlines configuration by automatically generating the encrypted field map. This ensures that encrypted fields align perfectly with the schema and simplifies the three-step process typically associated with encryption setup, shown below. Mongoose also keeps the schema and encrypted fields in sync, reducing the risk of mismatches.
Developers can easily declare fields with the encrypt property and configure encryption settings, using all field types and encryption schemes supported by QE and CSFLE. Additionally, users can manage their own encryption keys, enhancing control over their encryption processes. This comprehensive approach empowers developers to implement robust encryption effortlessly while maintaining operational efficiency.

Pre-integration experience

```
const kmsProviders = { local: { key: Buffer.alloc(96) } };
const keyVaultNamespace = 'data.keys';
const extraOptions = {};
const encryptedDatabaseName = 'encrypted';
const uri = '<mongodb URI>';

const encryptedFieldsMap = {
  'encrypted.patent': {
    encryptedFields: EJSON.parse('<EJSON string containing encrypted fields, either output from manual creation or createEncryptedCollection>', { relaxed: false }),
  }
};

const autoEncryptionOptions = {
  keyVaultNamespace,
  kmsProviders,
  extraOptions,
  encryptedFieldsMap
};

const schema = new Schema({
  patientName: String,
  patientId: Number,
  field: String,
  patientRecord: {
    ssn: String,
    billing: String
  }
}, { collection: 'patent' });

const connection = await createConnection(uri, {
  dbName: encryptedDatabaseName,
  autoEncryption: autoEncryptionOptions,
  autoCreate: false, // If using createEncryptedCollection, this is false. If manually creating the keyIds for each field, this is true.
}).asPromise();

const PatentModel = connection.model('Patent', schema);

const result = await PatentModel.find({}).exec();
console.log(result);
```

This example demonstrates the manual configuration required to set up a Mongoose model for QE and CSFLE, requiring three different steps:

1. Define an encryptedFieldsMap to specify which fields to encrypt.
2. Configure autoEncryptionOptions with key management settings.
3. Create a Mongoose connection that incorporates these options.

This process can be cumbersome, as it requires explicit setup for encryption.

New experience with Mongoose 8.15.0

```
const schema = new Schema({
  patientName: String,
  patientId: Number,
  field: String,
  patientRecord: {
    ssn: {
      type: String,
      encrypt: { keyId: '<uuid string of key id>', queries: 'equality' }
    },
    billing: {
      type: String,
      encrypt: { keyId: '<uuid string of key id>', queries: 'equality' }
    },
  }
}, { encryptionType: 'queryableEncryption', collection: 'patent' });

const connection = mongoose.createConnection();
const PatentModel = connection.model('Patent', schema);

const kmsProviders = { local: { key: Buffer.alloc(96) } };
const uri = '<mongodb URI>';
const keyVaultNamespace = 'data.keys';

const autoEncryptionOptions = {
  keyVaultNamespace,
  kmsProviders,
  extraOptions: {}
};

await connection.openUri(uri, { autoEncryption: autoEncryptionOptions });

const result = await PatentModel.find({}).exec();
console.log(result);
```

This "after" example showcases how the integration of QE and CSFLE into Mongoose simplifies the encryption setup process. Instead of the previous three-step approach, developers can now define encryption directly within the schema. In this implementation, fields like ssn and billing are marked with an encrypt property, allowing for straightforward configuration of encryption settings, including the keyId and query types. The connection to the database is established with a single call that includes the necessary auto-encryption options, eliminating the need for a separate encrypted fields map and complex configurations.
This streamlined approach enables developers to work natively within Mongoose, enhancing usability and reducing setup complexity while maintaining robust encryption capabilities.

Learn more about QE/CSFLE for Mongoose

We’re excited for you to build secure applications with QE/CSFLE for Mongoose. Here are some resources to get started with:

- Learn how to set up and use Mongoose with MongoDB through our tutorial.
- Check out our documentation to learn when to choose QE vs. CSFLE.
- Read the Mongoose CSFLE documentation.
MongoDB Atlas Stream Processing Now Supports Session Windows!
We're excited to announce that MongoDB Atlas Stream Processing now supports Session Windows! This powerful feature lets you build streaming pipelines that analyze and process related events that occur together over time, grouping them into meaningful sessions based on periods of activity. For instance, you can now track all of a customer’s interactions during a shopping journey, treating it as a single session that ends when they’re inactive for a specified period of time. Whether you're analyzing user behavior, monitoring IoT device activities, or tracking system operations, Atlas Stream Processing’s Session Windows make it easy to transform your continuous data streams into actionable insights, and make the data available wherever you need to use it.

What are Session Windows?

Session Windows are a powerful way to analyze naturally occurring activity patterns in your data by grouping related events that happen close together in time. Think of how users interact with websites or apps—they tend to be active for a period, then take breaks, then return for another burst of activity. Session Windows automatically detect these patterns by identifying gaps in activity, allowing you to perform aggregations and transformations on these meaningful periods of activity.

As an example, imagine you're an e-commerce company looking to better understand what your customers do during each browsing session to help improve conversions. With Atlas Stream Processing, you can build a pipeline that:

- Collects all the product pages a user visits during their browsing session
- Records the name, category, and price of each item viewed, plus whether items were added to a cart
- Automatically considers a session complete after 15 minutes of user inactivity
- Sends the session data to cloud storage to improve recommendation engines

With this pipeline, you provide your recommendation engine with ready-to-use data about your user sessions to improve your recommendations in real time. Unlike fixed time-based windows (tumbling or hopping), Session Windows adapt dynamically to each user’s behavior patterns.

How does it work?

Session Windows work similarly to the hopping and tumbling windows Atlas Stream Processing already supports, but with a critical difference: while those windows open and close on fixed time intervals, Session Windows dynamically adjust based on activity patterns. To implement a Session Window, you specify three required components:

- partitionBy: The field or fields that group your records into separate sessions. For instance, if tracking user sessions, use unique user IDs to ensure each user’s activity is processed separately.
- gap: The period of inactivity that signals the end of a session. For instance, in the above example, we consider a user's session complete when they go 15 minutes without clicking on a link in the website or app.
- pipeline: The operations you want to perform on each session's data. This may include counting the number of pages a user visited, recording the page they spent the most time on, or noting which pages were visited multiple times.

You then add this Session Window stage to your streaming aggregation pipeline, and Atlas Stream Processing continuously processes your incoming data, groups events into sessions based on your configuration, and applies your specified transformations. The results flow to your designated output destinations in real time, ready for analysis or to trigger automated actions. A minimal skeleton of the stage appears below; a fuller example follows in the next section.
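Here’s that minimal shape, using the same syntax as the fuller example that follows (the field names are placeholders):

```
// Minimal $sessionWindow stage: partition events per user and close the
// session after 15 minutes of inactivity (field names are placeholders).
let sessionStage = {
  $sessionWindow: {
    partitionBy: "$userId",              // one session per user
    gap: { unit: "minute", size: 15 },   // inactivity gap that ends a session
    pipeline: [
      // per-session aggregation: here, just count the events in the session
      { $group: { _id: "$userId", events: { $sum: 1 } } }
    ]
  }
};
```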
A quick example

Let’s say you want to build the pipeline that we mentioned above to track user sessions, notify users if they have items in their cart but haven’t checked out, and move their data downstream for analytics. You might do something like this:

1. Configure your source and sink stages

This is where you define the connections to any MongoDB or external location you intend to receive data from (source) or send data to (sink).

```
// Set your source to be change streams from the pageViews, cartItems, and orderedItems collections
let sourceCollections = {
  $source: {
    connectionName: "ecommerce",
    db: "customerActivity",
    coll: ["pageViews", "cartItems", "orderedItems"]
  }
};

// Set your destination (sink) to be the userSessions topic your recommendation engine consumes data from
let emitToRecommendationEngine = {
  $emit: {
    connectionName: "recommendationEngine",
    topic: "userSessions",
  }
};

// Create a connection to your sendCheckoutReminder Lambda function that sends a reminder to users
// to check out if they have items in their cart when the session ends
let sendReminderIfNeeded = {
  $externalFunction: {
    connectionName: "operations",
    as: "sendCheckoutReminder",
    functionName: "arn:aws:lambda:us-east-1:123412341234:function:sendCheckoutReminder"
  }
};
```

2. Define your Session Window logic

This is where you specify how data will be transformed in your stream processing pipeline.

```
// Step 1. Create a stage that pulls only the fields you care about from the change logs.
// Every document will have a userId and itemId, as all collections share those fields.
// Fields not present will be null.
let extractRelevantFields = {
  $project: {
    userId: "$fullDocument.userId",
    itemId: "$fullDocument.itemId",
    category: "$fullDocument.category",
    cost: "$fullDocument.cost",
    viewedAt: "$fullDocument.viewedAt",
    addedToCartAt: "$fullDocument.addedToCartAt",
    purchasedAt: "$fullDocument.purchasedAt"
  }
};

// Step 2. Setting _id to $userId groups all the documents by userId.
// Fields not present in any records will be null.
let groupSessionData = {
  $group: {
    _id: "$userId",
    itemIds: { $addToSet: "$itemId" },
    categories: { $addToSet: "$category" },
    costs: { $addToSet: "$cost" },
    viewedAt: { $addToSet: "$viewedAt" },
    addedToCartAt: { $addToSet: "$addedToCartAt" },
    purchasedAt: { $addToSet: "$purchasedAt" }
  }
};

// Step 3. Create a session window that closes after 15 minutes of inactivity. The pipeline specifies
// all operations to be performed on documents sharing the same userId within the window.
let createSession = {
  $sessionWindow: {
    partitionBy: "$_id",
    gap: { unit: "minute", size: 15 },
    pipeline: [ groupSessionData ]
  }
};
```

3. Create and start your stream processor

The last step is simple: create and start your stream processor.

```
// Create your pipeline array. The session data will be sent to the external function defined in
// sendReminderIfNeeded, and then it will be emitted to the recommendation engine Kafka topic.
let finalPipeline = [
  sourceCollections,
  extractRelevantFields,
  createSession,
  sendReminderIfNeeded,
  emitToRecommendationEngine
];

// Create your stream processor
sp.createStreamProcessor("userSessions", finalPipeline);

// Start your stream processor
sp.userSessions.start();
```

And that's it! Your stream processor now runs continuously in the background with no additional management required.
As users navigate your e-commerce website, add items to their carts, and make purchases, Atlas Stream Processing automatically:

- Tracks each user's activity in real time
- Groups events into meaningful sessions based on natural usage patterns
- Closes sessions after your specified period of inactivity (15 minutes)
- Triggers reminders for users with abandoned carts
- Delivers comprehensive session data to your analytics systems

All of this happens automatically at scale without requiring ongoing maintenance or manual intervention. Session Windows provide powerful, activity-based data processing that adapts to users' behavioral patterns rather than forcing their actions into arbitrary time buckets.

Ready to get started? Log in or sign up for Atlas today to create stream processors. You can learn more about Session Windows or get started using our tutorial.
MongoDB 8.0, Predefined Roles Now Available on DigitalOcean
I’m pleased to announce that MongoDB 8.0 is now available on DigitalOcean Managed MongoDB, bringing enhanced performance, scalability, and security to DigitalOcean’s fully managed MongoDB service. This update improves query efficiency, expands encryption capabilities, and optimizes scaling for large workloads. Additionally, DigitalOcean Managed MongoDB now includes role-based access control (RBAC) with predefined roles, making it easier to manage access control, enhance security, and streamline database administration across MongoDB clusters on DigitalOcean.

DigitalOcean is one of MongoDB’s premier “Certified by MongoDB” DBaaS partners, and since launching our partnership in 2021, developer productivity has been the core focus of MongoDB and DigitalOcean’s collaboration. These new enhancements to DigitalOcean Managed MongoDB are a testament to the importance of enabling developers, startups, and small and medium-sized businesses to rapidly build, deploy, and scale applications to accelerate innovation and increase productivity and agility.

What’s new in MongoDB 8.0?

MongoDB 8.0 features several upgrades designed to enhance its performance, security, and ease of use. Whether you’re managing high-throughput applications or looking for better query optimization, these improvements make DigitalOcean Managed MongoDB even more powerful:

- Higher throughput and improved replication performance: Dozens of architectural optimizations in MongoDB 8.0 have improved query and replication speed across the board.
- Better time series handling: Store and manage time series data more efficiently, helping to enable higher throughput with lower resource usage and costs.
- Expanded Queryable Encryption: MongoDB 8.0 adds range queries to Queryable Encryption, enabling new use cases for secure data operations. With encrypted searches that don’t expose sensitive data, MongoDB 8.0 enhances both privacy and compliance.
- Greater performance control: Set default maximum execution times for queries and persist query settings after restarts, providing more predictable database performance.

MongoDB 8.0 delivers 36% better read throughput, 59% faster bulk writes, 200% faster time series aggregations, and new sharding capabilities that distribute data across shards up to 50 times faster—making MongoDB 8.0 the most secure, durable, available, and performant version of MongoDB yet. Learn more about MongoDB 8.0 on our release page.

Benefits of RBAC for DigitalOcean Managed MongoDB

Managing database access across organizations can be a challenge, especially as teams grow and security requirements become more complex. Without a structured approach, organizations risk unauthorized access, operational inefficiencies, and compliance gaps. With RBAC now available in their MongoDB environments, DigitalOcean Managed MongoDB users can avoid these risks and enforce clear, predefined access policies, helping to ensure secure, efficient, and scalable database management. Here’s how RBAC can benefit your business:

- Stronger data protection: Keep your sensitive information secure by ensuring that only authorized users have access, reducing the risk of data breaches and strengthening overall security.
- Less manual work, fewer errors: Predefined roles make it easier to manage user access, cutting down on time-consuming manual tasks and minimizing the risk of mistakes.
- Easier compliance management: Stay ahead of industry regulations with structured access controls that simplify audits and reporting, giving you peace of mind.
- Lower costs and reduced risk: Automating access management reduces administrative overhead and helps prevent costly security breaches.
- Seamless scalability: As your business grows, easily adjust user permissions to match evolving team structures and operational needs.
- Simplified access control: Manage database access efficiently by assigning roles at scale, making administration more intuitive and governance more effective.

DigitalOcean Managed MongoDB: Better than ever

With the introduction of MongoDB 8.0 and RBAC, DigitalOcean Managed MongoDB is now more powerful, secure, and efficient than ever. Whether you’re scaling workloads, optimizing queries, or strengthening security, these updates empower you to manage your MongoDB clusters with greater confidence and ease. Get started today and take full advantage of these cutting-edge enhancements in DigitalOcean’s Managed MongoDB!

To create a new cluster with MongoDB 8.0, or to upgrade your existing cluster through the DigitalOcean Control Panel or API, check out the DigitalOcean site. Read more about these new features in DigitalOcean's blog about MongoDB 8.0 and RBAC, or simply try DigitalOcean Managed MongoDB by getting started here!
Now Generally Available: 7 New Resource Policies to Strengthen Atlas Security
Organizations demand a scalable means of enforcing security and governance controls across their database deployments without slowing down developer productivity. To address this, MongoDB introduced resource policies in public preview on February 10th, 2025. Resource policies enable organization administrators to set up automated, organization-wide ‘guardrails’ for their MongoDB Atlas deployments. At public preview, three policies were released to this end.

Today, MongoDB is announcing the general availability of resource policies in MongoDB Atlas. This release introduces seven additional policies and a new graphical user interface (GUI) for creating and managing policies. These enhancements give organizations greater control over MongoDB Atlas configurations, simplifying security and compliance automation.

How resource policies enable secure innovation

Innovation is essential for organizations to maintain competitiveness in a rapidly evolving global landscape. Companies with higher levels of innovation outperformed their peers financially, according to a Cornell University study analyzing S&P 500 companies between 1998 and 2023 [1]. One of the most effective ways to drive innovation is by equipping developers with the right tools and giving them the autonomy to put them into action [2]. However, without standardized controls governing those tools, developers can inadvertently configure Atlas clusters to deviate from corporate or regulatory best practices. Manual approval processes for every new project create delays. Concurrently, platform teams struggle to enforce consistent security policies across the organization, leading to increased complexity and costs.

As cybersecurity threats evolve daily and regulations tighten, granting developers autonomy and quickly provisioning access to essential tools can introduce risks. Organizations must implement strong security measures to maintain compliance and enable secure innovation.

Resource policies empower organizations to enforce security and compliance standards across their entire Atlas environment. Instead of targeting specific user groups, these policies establish organization-wide guardrails to govern how Atlas can be configured. This reduces the risk of misconfigurations and security gaps. With resource policies, security and compliance standards are applied automatically across all Atlas projects and clusters. This eliminates the need for manual approvals. Developers gain self-service access to the resources they need while remaining within approved organizational boundaries. Simultaneously, platform teams can centrally manage resource policies to ensure consistency and free up time for strategic initiatives.

Resource policies strengthen security, streamline operations, and help accelerate innovation by automating guardrails and simplifying governance. Organizations can scale securely while empowering developers to move faster without compromising compliance.

What resource policies are available?

Policy Type | Description | Available Since
Restrict cloud provider | Ensure clusters are only deployed on approved cloud providers (AWS, Azure, or Google Cloud). This prevents accidental or unauthorized deployments in unapproved environments. This supports organizations in meeting regulatory or business requirements. | Public preview
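For a sense of how these guardrails are expressed, resource policies are authored in the Cedar policy language. The sketch below shows roughly what a “restrict cloud provider” rule could look like; the entity and action names are assumptions modeled on the Atlas resource policy documentation, so verify them there before use:

```
// Illustrative sketch only: forbid creating or editing clusters on GCP,
// keeping deployments on the organization's approved providers.
// Entity and action names are assumptions based on the Atlas docs.
forbid (
  principal,
  action == cloud::Action::"cluster.modify",
  resource
)
when {
  context.cluster.cloudProviders.containsAny([cloud::cloudProvider::"gcp"])
};
```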
Modernize On-Prem MongoDB With Google Cloud Migration Center
Shifting your business infrastructure to the cloud offers significant advantages, including enhanced system performance, reduced operational costs, and increased speed and agility. However, a successful cloud migration isn’t a simple lift-and-shift. It requires a well-defined strategy, thorough planning, and a deep understanding of your existing environment to align with your company’s unique objectives. Google Cloud’s Migration Center is designed to simplify this complex process, acting as a central hub for your migration journey. It streamlines the transition from your on-premises servers to the Google Cloud environment, offering tools for discovery, assessment, and planning.

MongoDB is excited to announce a significant enhancement to Google Cloud Migration Center: integrated MongoDB cluster assessment in the Migration Center Use Case Navigator. Google Cloud and MongoDB have collaborated to help you gain in-depth visibility into your MongoDB deployments, both MongoDB Community Edition and MongoDB Enterprise Edition, and simplify your move to the cloud. To understand the benefits of using Migration Center, let’s compare it with the process of migrating without it.

Image 1. The Migration Center Use Case Navigator menu, showing migration destinations for MongoDB deployments.

Migrating without Migration Center

- Manual discovery: Without automation, asset inventories were laborious, leading to frequent errors and omissions.
- Complex planning: Planning involved cumbersome spreadsheets and manual dependency mapping, making accurate cost estimation and risk assessment difficult.
- Increased risk: Lack of automated assessment resulted in higher migration failure rates and potential data loss due to undiscovered compatibility issues.
- Fragmented tooling: Disparate tools for each migration phase created inefficiencies and complexity, hindering a unified migration strategy.
- Higher costs and longer timelines: Manual processes and increased risks significantly lengthened project timelines and inflated migration costs.
- Specialized skill requirements: Migrating required teams to have deep, specialized knowledge of all parts of the infrastructure being moved.

Migrating with Migration Center

When you move to the cloud, you want to make your systems better, reduce costs, and improve performance. A well-planned migration helps you do that. With Migration Center’s new MongoDB assessment, you can:

- Discover and inventory your MongoDB clusters: Easily identify all your MongoDB Community Server and MongoDB Enterprise Server clusters running in your on-premises environment.
- Gain deep insights: Understand the configuration, performance, and resource utilization of your MongoDB clusters. This data is essential for planning a successful and cost-effective migration.
- Simplify your migration journey: By providing a clear understanding of your current environment, Migration Center helps you make informed decisions and streamline the migration process, minimizing risk and maximizing efficiency.
- Use a unified platform: Migration Center is designed to be a one-stop shop for your cloud migration needs. It integrates asset discovery, cloud spend estimation, and various migration tools, simplifying your end-to-end journey.
- Accelerate using MongoDB Atlas: Migrate your MongoDB workloads to MongoDB Atlas running on Google Cloud with confidence. Migration Center provides the data you need to ensure a smooth transition, enabling you to fully use the scalability and flexibility of MongoDB Atlas.
By providing MongoDB workload identification and guidance, the Migration Center Use Case Navigator enables you to gain valuable insights into the potential transformation journeys for your MongoDB workloads. With the ability to generate comprehensive reports on your MongoDB workload footprint, you can better understand your MongoDB databases. This ultimately enables you to update your systems and gain the performance enhancements of using MongoDB Atlas on Google Cloud, all while saving money.

Learn more about Google Cloud Migration Center from the documentation. Visit our product page to learn more about MongoDB Atlas. Get started with MongoDB Atlas on Google Cloud today.
Firebase & MongoDB Atlas: A Powerful Combo for Rapid App Development
Firebase and MongoDB Atlas are powerful tools developers can use together to build robust and scalable applications. Firebase offers build and runtime solutions for AI-powered experiences, while MongoDB Atlas provides a fully managed cloud database service optimized for generative AI applications. We’re pleased to announce the release of the Firebase extension for MongoDB Atlas, a direct MongoDB connector for Firebase that further streamlines the development process by enabling seamless integration between the two platforms. This extension enables developers to directly interact with MongoDB collections and documents from within their Firebase projects, simplifying data operations and reducing development time.

A direct MongoDB connector, built as a Firebase extension, facilitates real-time data synchronization between Firebase and MongoDB Atlas. This enables data consistency across both platforms, empowering developers to build efficient, data-driven applications using the strengths of Firebase and MongoDB.

MongoDB as a backend database for Firebase applications

Firebase offers a streamlined backend for rapid application development, providing offerings like authentication, hosting, and real-time databases. However, applications requiring complex data modeling, high data volumes, or sophisticated querying often work well with MongoDB’s document store. Integrating MongoDB as the primary data store alongside Firebase addresses these challenges. MongoDB provides a robust document database with a rich query language (the MongoDB Query Language), powerful indexing (including compound, geospatial, and text indexes), and horizontal scalability for handling massive datasets.

This architecture enables developers to use Firebase’s convenient backend services while benefiting from MongoDB’s powerful data management capabilities. Developers commonly use Firebase Authentication for user management, then store core application data, including complex relationships and large volumes of information, in MongoDB. This hybrid approach combines Firebase’s ease of use with MongoDB’s data-handling prowess.

Furthermore, the integration of MongoDB Atlas Vector Search significantly expands the capabilities of this hybrid architecture. Modern applications increasingly rely on semantic search and AI-driven features, which require efficient handling of vector embeddings. MongoDB Atlas Vector Search enables developers to perform similarity searches on vector data, unlocking powerful use cases.

Quick-start guide for Firebase’s MongoDB Atlas extension

With the initial release of the MongoDB Atlas extension in Firebase, we are targeting the extension to perform operations such as findOne, insertOne, and vectorSearch on MongoDB. This blog will not cover how to create a Firebase application but will walk you through creating a MongoDB backend for connecting to MongoDB using our Firebase extension. To learn more about how to integrate the deployed backend into a Firebase application, see the official Firebase documentation.

Install the MongoDB Atlas extension in Firebase:

1. Open the Firebase Extensions Hub.
2. Find and select the MongoDB Atlas extension, or use the search bar to find “MongoDB Atlas.”
3. Click on the extension card.
4. Click the “Install” button. You will be redirected to the Firebase console.
5. On the Firebase console, choose the Firebase project where you want to install the extension.

Image 1. The MongoDB Atlas extension’s installation page.
On the installation page:

1. Review “Billing and Usage.”
2. Review “API Endpoints.”
3. Review the permissions granted to the function that will be created.

Configure the extension by providing the following details:

- MongoDB URI: The connection string for your MongoDB Atlas cluster
- Database Name: The name of the database you want to use
- Collection Name: The name of the collection you want to use
- Vertex AI Embedding to use: The type of embedding model from Vertex AI
- Vertex AI LLM model name: The name of the large language model (LLM) from Vertex AI
- MongoDB Index Name: The name of the index in MongoDB
- MongoDB Index Field: The field that the index is created upon
- MongoDB Embedding Field: The field that contains the embedding vectors
- LLM Prompt: The prompt that will be sent to the LLM

Then click “Install Extension.”

Image 2. The MongoDB Atlas extension created from the Firebase Extensions Hub.

Once the extension is created, you can interact with it through the associated Cloud Function (a sketch of such a call appears at the end of this section).

Image 3. The Cloud Run function created by the Firebase extension.

In conclusion, the synergy between Firebase extensions and MongoDB Atlas opens up exciting possibilities for developers seeking to build efficient, scalable, AI-powered applications. By using Firebase’s streamlined backend services alongside MongoDB’s robust data management and vector search capabilities, developers can create applications that handle complex data and sophisticated AI functionalities with ease. The newly introduced Firebase extension for MongoDB Atlas, specifically targeting operations like findOne, insertOne, and vectorSearch, marks a significant step toward simplifying this integration. While this initial release provides a solid foundation, the potential for further enhancements, such as direct connectors and real-time synchronization, promises to further empower developers. As demonstrated through the quick-start guide, setting up this powerful combination is straightforward, enabling developers to quickly harness the combined strength of these platforms. Ultimately, this integration fosters a more flexible and powerful development environment, enabling the creation of innovative, data-driven applications that meet the demands of modern users.

Build your application with a pre-packaged solution using Firebase. Visit our product page to learn more about MongoDB Atlas.
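As a rough illustration of interacting with the deployed function, here is how a TypeScript client might call an extension-created endpoint over HTTPS. The URL and request body shape are assumptions for illustration only; check the “API Endpoints” section of your installed extension in the Firebase console for the real values:

```
// Minimal sketch: call the extension-created findOne endpoint from TypeScript.
// The endpoint URL and body shape below are illustrative assumptions; the real
// endpoint is listed under "API Endpoints" for the installed extension.
async function findOneMovie(): Promise<unknown> {
  const response = await fetch(
    "https://us-central1-<your-project>.cloudfunctions.net/ext-mongodb-atlas-findOne",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ filter: { title: "The Matrix" } }),
    }
  );
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  return response.json();
}

findOneMovie().then(console.log).catch(console.error);
```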
Introducing MongoDB Atlas Service Accounts via OAuth 2.0
Authentication is a crucial aspect of interacting with the MongoDB Atlas Administration API, as it ensures that only authorized users or applications can access and manage resources within a MongoDB Atlas project. While MongoDB Atlas users currently have programmatic API keys (PAKs) as their primary authentication method, we recognize that development teams have varying authentication workflow requirements. To help developer teams meet these requirements, we’re excited to announce that Service Accounts via OAuth 2.0 for MongoDB Atlas is now generally available! MongoDB Atlas Service Accounts offer a more streamlined way of authenticating API requests for applications, enabling your developers to use their preferred authentication workflow.

Addressing the challenges of using programmatic access keys

At some point in your MongoDB Atlas journey, you have likely created PAKs. These API keys enable MongoDB Atlas project owners to authenticate access for their users. API keys include a public key and a private key. These two parts serve the same function as a username and a password when you make API requests to MongoDB Atlas. Each API key belongs to only one organization, but you can grant API keys access to any number of projects in that organization.

PAKs use a method of authentication known as HTTP Digest, a challenge-response mechanism that uses a hash function to securely transmit credentials without sending plaintext passwords over the network. MongoDB Atlas hashes the public key and the private key using a unique value called a nonce. The HTTP Digest specification makes the nonce valid only for a short amount of time to prevent replay attacks, so you can’t cache a nonce and use it forever. It’s also why your API keys are a mix of random symbols, letters, and numbers, and why you can only view a private key once. As a result, many teams must manage and rotate PAKs to maintain application access security. However, doing this across multiple applications can be cumbersome, especially for teams operating in complex environments. That’s why we’ve introduced support for an alternate authentication method through Service Accounts via OAuth 2.0, which enables users to take advantage of a more automated authentication method for application development.

Using Service Accounts with an OAuth 2.0 client credentials flow

OAuth 2.0 is a standard for interapplication authentication that relies on in-flight TLS encryption to secure its communication channels. This prevents unauthorized parties from intercepting or tampering with the data. The MongoDB Atlas Administration API supports in-flight TLS encryption and uses it to enable Service Accounts as an alternative method for authenticating users.

MongoDB Atlas Service Accounts provide a form of OAuth 2.0 authentication that enables machine-to-machine communication, allowing applications, rather than users, to authenticate and access MongoDB Atlas resources. Authentication through Service Accounts follows the same access control model as PAKs, with full authentication lifecycle management. Service Accounts use the OAuth 2.0 client credentials flow, with MongoDB Atlas acting as both the identity provider and the authorization server. Like PAKs, Service Accounts are not tied to individual MongoDB Atlas users but are still ingrained with MongoDB Atlas.

Figure 1. How it works: MongoDB Atlas Service Accounts.
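In practice, the client credentials flow boils down to exchanging the Service Account’s client ID and secret for a short-lived access token, then presenting that token as a bearer credential on Administration API calls. The sketch below follows the documented flow, but double-check the token endpoint and headers against the current docs, and treat the credentials and version string as placeholders:

```
# Exchange Service Account credentials for an access token (client credentials flow).
# The client ID and secret are placeholders.
curl --request POST "https://cloud.mongodb.com/api/oauth/token" \
  --user "mdb_sa_id_example:mdb_sa_sk_example" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data "grant_type=client_credentials"

# Use the returned token as a bearer credential on an Administration API call.
curl --request GET "https://cloud.mongodb.com/api/atlas/v2/groups" \
  --header "Authorization: Bearer <access token from previous response>" \
  --header "Accept: application/vnd.atlas.2023-01-01+json"
```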
Experiencing benefits through Service Accounts

Using Service Accounts to manage programmatic access offers a number of advantages:

- Automation: Service Accounts offer an automated way to manage access. Users don’t need to manually manage authentication mechanisms, like recreating a Service Account to rotate the client secrets. Instead, they only need to regenerate the client secrets while keeping the rest of the existing Service Account’s configuration intact. Furthermore, Service Accounts are broadly supported across many platforms, enabling easier integration between different services and tools and facilitating easier connections across applications and infrastructure components, regardless of the underlying technology.
- Seamless integration with MongoDB Atlas: Service Accounts enable developers to manage authentication in the workflow of their choice. Users can manage the Service Account lifecycle at the organization and project levels via the MongoDB Atlas Administration API, the provided client library (currently, the Atlas Go SDK), and the Atlas UI. They integrate with MongoDB Atlas via the OAuth 2.0 client credentials flow, enabling seamless authentication using cloud-native identity systems.
- Granular access control and role management: Service Accounts also have robust security features, providing a standardized and consistent way to manage access. Each organization or project can have its own Service Account, simplifying credential management and access control. Additionally, you can define granular roles for a Service Account to limit its access to only the necessary resources. This reduces the risk of over-permissioning and unauthorized access.

Ready to uplevel your user authentication? Learn how to create your first Service Account by visiting our documentation. Not a MongoDB Atlas user yet? Sign up for free today.
LangChainGo and MongoDB: Powering RAG Applications in Go
MongoDB is excited to announce our integration with LangChainGo, making it easier to build Go applications powered by large language models (LLMs). This integration streamlines LLM-based application development by leveraging LangChainGo’s abstractions to simplify LLM orchestration, MongoDB’s vector database capabilities, and Go’s strengths as a performant, scalable, and easy-to-use production-ready language. With robust support for retrieval-augmented generation (RAG) and AI agents, MongoDB enables efficient knowledge retrieval, contextual understanding, and real-time AI-driven workflows. Read on to learn more about this integration and the advantages of using MongoDB as a vector database for AI/ML applications in Go.

LangChainGo: Bringing LangChain to the Go ecosystem

LangChain is an open-source framework that simplifies building LLM-powered applications. It offers tools and abstractions to integrate LLMs with diverse data sources, APIs, and workflows, supporting use cases like chatbots, document processing, and autonomous agents. While LangChain currently supports only Python and JavaScript, the need for a similar solution in the Go ecosystem led to the development of LangChainGo.

LangChainGo is a community-driven, third-party port of the LangChain framework for the Go programming language. It allows Go developers to directly integrate LLMs into their Go applications, bringing the capabilities of the original LangChain framework into the Go ecosystem. LangChainGo enables users to embed data using various services, including OpenAI, Ollama, Mistral, and others. It also supports integration with a variety of vector stores, such as MongoDB.

MongoDB’s role as an operational and vector database

MongoDB excels as a unified data layer for AI applications with native vector search capabilities due to its simplicity, scalability, security, and rich set of features. With Atlas Vector Search built into the core database, there’s no need to sync operational and vector data separately—everything stays in one place, saving time and reducing complexity when you develop AI-powered applications. You can easily combine semantic searches with metadata filters, graph lookups, aggregation pipelines, and even geospatial or lexical search, enabling powerful hybrid queries all within a single platform.

MongoDB’s distributed architecture allows vector search to scale independently from the core database, ensuring optimized vector query performance and workload isolation for superior scalability. Plus, with enterprise-grade security and high availability, MongoDB provides the reliability and peace of mind you need to power your AI-driven applications at scale.

MongoDB, Go, and AI/ML

As the Go AI/ML landscape grows, MongoDB continues to drive innovation with its powerful vector search capabilities and LangChainGo integration, empowering developers to build RAG implementations and AI agents. This integration is powered by the MongoDB Go Driver, which supports vector search and allows developers to interact with MongoDB directly from their Go applications, streamlining development and reducing friction.

Figure 1. RAG architecture with MongoDB and LangChainGo.

While Python and JavaScript dominate the AI/ML ecosystem, Go’s AI/ML ecosystem is still emerging—yet its potential is undeniable. Go’s simplicity, scalability, runtime safety, concurrency, and single-binary deployment make it an ideal production-ready language for AI.
With MongoDB’s powerful database and helpful learning resources, developers can seamlessly build next-generation AI solutions in Go. Ready to dive in? Explore the tutorials below to get started!

Getting Started with MongoDB and LangChainGo

MongoDB was added as a vector store in LangChainGo’s v0.1.13 release. It is packaged as mongovector, a component that enables developers to use MongoDB as a powerful vector store in LangChainGo (a sketch of the API’s shape follows the tutorial list below). Usage guidance is provided through the mongovector-vectorstore-example, along with the in-depth tutorials linked below. Dive into this integration to unlock the full potential of Go AI applications with MongoDB.

We’re excited for you to work with LangChainGo. Here are some tutorials to help you get started:

- Get Started with the LangChainGo Integration
- Retrieval-Augmented Generation (RAG) with Atlas Vector Search
- Build a Local RAG Implementation with Atlas Vector Search
- Get started with Atlas Vector Search (select Go from the dropdown menu)
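Here is a rough sketch of what using mongovector looks like, assuming the v0.1.13 API in which mongovector.New wraps a driver collection and an embedder; the connection string, database, and collection names are placeholders, and the exact signatures are worth confirming against the mongovector godoc and the linked example:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/embeddings"
	"github.com/tmc/langchaingo/llms/openai"
	"github.com/tmc/langchaingo/schema"
	"github.com/tmc/langchaingo/vectorstores/mongovector"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx := context.Background()

	// Connect to Atlas; the URI and namespace below are placeholders.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("<your Atlas connection string>"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)
	coll := client.Database("rag_db").Collection("documents")

	// Build an embedder from an LLM provider (OpenAI here; Ollama, Mistral, etc. also work).
	llm, err := openai.New()
	if err != nil {
		log.Fatal(err)
	}
	embedder, err := embeddings.NewEmbedder(llm)
	if err != nil {
		log.Fatal(err)
	}

	// Wrap the collection as a LangChainGo vector store (signature assumed; see godoc).
	store := mongovector.New(coll, embedder)

	// Add a document, then run a semantic search against the Atlas Vector Search index.
	if _, err := store.AddDocuments(ctx, []schema.Document{
		{PageContent: "MongoDB Atlas supports native vector search."},
	}); err != nil {
		log.Fatal(err)
	}
	docs, err := store.SimilaritySearch(ctx, "vector search in Atlas", 1)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(docs)
}
```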
Secure and Scale Data with MongoDB Atlas on Azure and Google Cloud
MongoDB is committed to simplifying the development of robust, data-driven applications—regardless of where the data resides. Today, we’re announcing two major updates that enhance the security, scalability, and flexibility of MongoDB Atlas across cloud providers.

Private, secure connectivity with Azure Private Link for Atlas Data Federation, Atlas Online Archive, and Atlas SQL

Developers building on Microsoft Azure can now establish private, secure connections to MongoDB Atlas Data Federation, MongoDB Atlas Online Archive, and MongoDB Atlas SQL using Azure Private Link, enabling:

- End-to-end security: Reduce exposure to security risks by keeping sensitive data off the public internet.
- Low-latency performance: Ensure faster and more reliable access through direct, private connectivity.
- Scalability: Build applications that scale while maintaining secure, seamless data access.

Imagine a financial services company that needs to run complex risk analysis across multiple data sources, including live transactional databases and archived records. With MongoDB Atlas Data Federation and Azure Private Link, the company can securely query and aggregate this data without exposing it to the public internet, helping it achieve compliance with strict regulatory standards. Similarly, an e-commerce company managing high volumes of customer orders and inventory updates can use MongoDB Atlas Online Archive to seamlessly move older transaction records to cost-effective storage—all while ensuring real-time analytics dashboards still have instant access to historical trends. With Azure Private Link, these applications benefit from secure, low-latency connections, enabling developers to focus on innovation instead of on managing complex networking and security policies.

General availability of MongoDB Atlas Data Federation and Atlas Online Archive on Google Cloud

MongoDB Atlas Data Federation and Atlas Online Archive are now generally available for developers working with Google Cloud. This empowers developers to:

- Query data across sources: Run a single query across live databases, cloud storage, and data lakes without complex extract, transform, and load (ETL) pipelines.
- Optimize storage costs: Automatically move infrequently accessed data to lower-cost storage while keeping it queryable with MongoDB Atlas Online Archive.
- Achieve multi-cloud flexibility: Run applications across Amazon Web Services (AWS), Azure, and Google Cloud without being locked in.

For example, a media streaming service might store frequently accessed content metadata in a high-performance database while archiving older user activity logs in Google Cloud Storage. With MongoDB Atlas Data Federation, the streaming service can analyze both live and archived data in a single query, making it easier to surface personalized recommendations without complex ETL processes. For a healthcare analytics platform, keeping years’ worth of patient records in a primary database can be expensive. By using MongoDB Atlas Online Archive, the platform can automatically move older records to lower-cost storage—while still enabling fast access to historical patient data for research and reporting.

These updates give developers more control over building and scaling in the cloud. Whether they need secure access on Azure or seamless querying and archiving on Google Cloud, MongoDB Atlas simplifies security, performance, and cost efficiency. These updates are now live!
Log in to your MongoDB Atlas account to start exploring the possibilities today.