Introducing the Ability to Independently Scale Analytics Node Tiers for MongoDB Atlas
We’re excited to announce analytics node tiers for MongoDB Atlas! Analytics node tiers provide greater control and flexibility by allowing you to customize the exact infrastructure you need for your analytics workloads.

Analytics node tiers provide control and flexibility

Until now, analytics nodes in MongoDB Atlas clusters have used the same cluster tier as all other nodes. However, operational and analytical workloads can vary greatly in their resource requirements. Analytics node tiers let you tune the performance of your analytics workloads by choosing the best tier size for your needs, which means you can choose an analytics node tier larger or smaller than the operational nodes in your cluster. This added level of customization ensures you achieve the performance required for both transactional and analytical queries, without over- or under-provisioning your entire cluster for the sake of the analytical workload. Analytics node tiers are available in both Atlas and Atlas for Government.

A standard replica set contains a primary node for reads and writes and two read-only secondary nodes. Analytics nodes provide an additional read-only node that is dedicated to analytical reads.

Choose a higher or lower analytics node tier based on your analytics needs

Teams with large user bases relying on BI dashboards may want to increase their analytics node tier above that of their operational nodes. Choosing a higher tier is useful when you have many users or require more memory to serve analytics needs. Scaling up the entire cluster tier would be costly; scaling up just your analytics node tier helps optimize the cost. Conversely, teams with inconsistent needs may want to decrease their analytics node tier below that of their operational nodes. The ability to set a lower tier gives you flexibility and cost savings when you have fewer users or analytics are not your top priority.
With analytics node tiers, you get more discretion and control over how you manage your analytics workloads by choosing the appropriately sized tier for your analytics needs. Get started today by setting up a new cluster or adding an analytics node tier to any existing cluster. Check out our documentation to learn more.
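For context on how analytical reads actually reach these nodes: Atlas labels analytics nodes with a replica set tag, and clients can route reads to them with read preference tags. The sketch below builds such a connection string in Python; the host, credentials, and database name are illustrative placeholders, not real endpoints.

```python
# Sketch: steering reads to Atlas analytics nodes with read preference tags.
# The cluster host, user, and database below are illustrative placeholders.
base_uri = "mongodb+srv://user:pass@cluster0.example.mongodb.net/reporting"

# Atlas tags analytics nodes with nodeType:ANALYTICS; combining that tag with
# a secondary read preference keeps BI and analytical reads off the
# operational primary and secondaries.
options = {
    "readPreference": "secondary",
    "readPreferenceTags": "nodeType:ANALYTICS",
}
analytics_uri = base_uri + "?" + "&".join(f"{k}={v}" for k, v in options.items())
print(analytics_uri)
```

Pass a URI like this to your driver’s client constructor so dashboards and long-running analytical queries land on the analytics tier rather than competing with operational traffic.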
MongoDB 6.0 Now Available!
MongoDB 6.0 is now available for download. This major release introduces improvements to existing features as well as new products that empower you to build faster, troubleshoot less, and cut complexity from your workflows.

Continuing the developer data platform theme introduced at MongoDB World 2022, MongoDB 6.0’s new and enhanced capabilities help remove the need for outside platforms in your tech stack or application architecture. That means less time managing fundamentally incompatible solutions and more time building applications and solutions.

MongoDB 6.0 includes several feature upgrades, more integrations, support for a diverse range of scenarios, and much more. For instance, time series collections and change streams can now be used for additional use cases, such as geo-indexing and retrieving the before and after states of documents, respectively. The release also includes exciting new capabilities for security, analytics, search, and more. One innovative example is Queryable Encryption, a first-of-its-kind technology that allows you to efficiently query data while it remains encrypted, decrypting it only when it’s made available to the user.

To learn more about MongoDB 6.0, read “7 Big Reasons to Upgrade to MongoDB 6.0” and visit the MongoDB 6.0 homepage, where you can upgrade now.
Change Streams in MongoDB 6.0 Support Pre- and Post-Image Retrieval, DDL Operations, and More
Introduced with MongoDB 3.6, a MongoDB change stream is an API on top of the operations log (oplog) that allows users to subscribe their applications to data changes in a collection, database, or entire deployment. It makes it easy for teams to build event-driven applications or systems on MongoDB that capture and react to data changes in near real time, with no middleware or database polling scripts required. In MongoDB 6.0, we have enhanced change streams with new functionality that addresses a wider range of use cases while improving performance.

Change streams now allow users to easily retrieve the before and after state of an entire document, sometimes referred to as pre- and post-images, when a document is updated or deleted. Suppose you are storing user sessions in a collection and using a time-to-live (TTL) index to delete sessions as they expire. You can now reference data in the deleted documents to provide more information to the end user about their session after the fact. Or perhaps you need to send an updated version of the entire document to a downstream system each time there is a data change. Support for retrieving the before and after states of a document greatly expands the use cases change streams can address.

Prior to MongoDB 6.0, change streams only supported data manipulation language (DML) operations. Change streams in MongoDB 6.0 also support data definition language (DDL) operations, such as creating and dropping indexes and collections, so you can react to database events in addition to data changes.

Change streams are built on MongoDB’s aggregation framework, which gives teams the ability not only to capture and react to data changes but also to filter and transform the associated notifications as needed. With MongoDB 6.0, change streams that leverage filtering have those stages automatically pushed to the optimal position within the change stream pipeline, dramatically improving performance.
We’re excited to announce these enhancements to change streams in MongoDB 6.0 and look forward to seeing and hearing about all the applications and systems you’ll build with this expanded feature set. To learn more, visit our docs.
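As a rough sketch of the flow described above (the database and collection names are hypothetical, and the driver calls are left commented out because they require a running MongoDB 6.0 deployment and PyMongo 4.2+):

```python
# Pre-/post-images are a per-collection setting, enabled e.g. via collMod:
# db.command("collMod", "sessions",
#            changeStreamPreAndPostImages={"enabled": True})

# Options for a change stream that surfaces both document states:
watch_options = {
    "full_document": "updateLookup",                 # post-image on updates
    "full_document_before_change": "whenAvailable",  # pre-image if retained
}

# Only watch updates and deletes; in MongoDB 6.0 such $match filters are
# automatically pushed to the optimal position in the pipeline.
pipeline = [{"$match": {"operationType": {"$in": ["update", "delete"]}}}]

# with db.sessions.watch(pipeline, **watch_options) as stream:
#     for event in stream:
#         before = event.get("fullDocumentBeforeChange")  # pre-image
#         after = event.get("fullDocument")               # post-image
```

In the TTL-index scenario above, the `fullDocumentBeforeChange` field is what lets you recover the contents of an expired session document after it has been deleted.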
MongoDB Atlas Expands in the Middle East
We’re proud to announce further expansion in the Middle East with the launch of MongoDB Atlas on AWS in the United Arab Emirates (UAE) region. MongoDB Atlas is now available in 22 AWS regions around the world, including eight Asia Pacific regions and three Middle East and Africa regions.

The UAE region is an AWS Recommended Region, meaning it has three Availability Zones (AZs), bringing significant infrastructure to the Middle East. When you deploy a cluster in the UAE, Atlas automatically distributes replicas across the AZs for higher availability. If there’s an outage in one zone, the Atlas cluster automatically fails over to keep running in the other two. You can also deploy multi-region clusters with the same automatic failover built in.

We’re delighted that, as with customers in Bahrain, Cape Town, and elsewhere, organizations in the United Arab Emirates will now be able to keep data in their own country, delivering low-latency performance and ensuring confidence in data locality. We’re confident our UAE customers in government, financial services, and utilities in particular will appreciate this capability as they build tools to improve citizens’ lives and better serve their local users.
Announcing Atlas Data Federation and Atlas Data Lake
Two years ago, we released the first iteration of Atlas Data Lake. Since then, we’ve helped customers combine data from various storage layers to feed downstream systems. But after years spent studying our customers’ experiences, we realized we hadn’t gone far enough. To truly unleash the genius in all our developers, we needed to add an economical cloud object storage solution with a rich MQL query experience to the world of Atlas. Today, we’re thrilled to announce that our new Atlas Data Federation and Atlas Data Lake offerings do just that.

We now offer two complementary services: Atlas Data Federation (our existing query service, formerly known as Atlas Data Lake) and our new and improved Atlas Data Lake (a fully managed, analytics-oriented storage service). Together, these services (both in preview) provide flexible and versatile options for querying and transforming data across storage services, as well as a MongoDB-native analytic storage solution. With these tools, you can query across multiple clusters, move data into self-managed cloud object storage for consumption by downstream services, query a workload-isolated, inexpensive copy of cluster data, compare your cluster data across different points in time, and much, much more.

In hearing from our customers about their experiences with Atlas Data Lake, we learned where they have struggled, as well as which features they’ve been looking for us to provide. With this in mind, we decided to rename our current query federation service to Atlas Data Federation to better align with how customers see the service and get value from it. We’ve seen many customers benefit from the flexibility of a federated query engine service, including querying data across multiple clusters, databases, and collections, as well as exporting data to third-party systems. We also saw where our customers were struggling with data lakes.
We heard them ask for a fully managed storage solution so they could achieve all of their analytic goals within Atlas. Specifically, customers wanted scalable storage that would provide high query performance at a low cost. Our new Data Lake provides a high-performance analytic object storage solution, allowing customers to query historical data with no additional formatting or maintenance work needed on their end.

How it works

Atlas Data Federation encompasses our existing Data Lake functionality with several new affordances. It continues to deliver the same power that it always has, with increased performance and efficiency. The new Atlas Data Lake allows you to create Data Lake pipelines (based on your Atlas cluster backup schedules) and choose fields on which to optimize queries. The service takes the following steps:

- On the selected schedule, a copy of your collection is extracted from your Atlas backup with no impact on your cluster.
- During extraction, we build partition indexes based on the contents of your documents and the fields you’ve selected for optimization. These indexes capture the minimums and maximums (and other statistics) of the records in each partition, letting your queries quickly find the relevant data.
- Finally, the underlying data lands in an analytics-oriented format inside cloud object storage, minimizing the data scanned when you execute a query.

Once a pipeline has run and a Data Lake dataset has been created, you can select it as a data source in our new Data Federation query experience. You can either set it as the source for a specific virtual collection in a Federated Database Instance, or have your Federated Database Instance generate a collection name for each dataset your pipeline has created. Notably, no part of this process consumes compute resources from your cluster: neither the export nor the querying of datasets.
These datasets provide workload isolation and consistency for long-running analytic queries, and they serve as a target for ETL jobs using the powerful $out to S3. This makes it easy to compare the state of your data over time. Advanced though this is, it’s only the beginning of the story. We’re committed to evolving the service, improving performance, adding more sources of data, and building new features, all based on the feedback you, our users, give us. We can’t wait to see how you’ll use this powerful new tool, and we can’t wait to hear what you’d like to see next. Try Atlas Data Lake Today
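For illustration, a federated query can end in a $out stage that writes its results to S3. The pipeline below is a hedged sketch following the shape Atlas Data Federation documents for $out to S3; the bucket, region, file prefix, and collection name are hypothetical.

```python
# Sketch: exporting query results to cloud object storage via $out to S3.
# Bucket name, region, and file prefix are illustrative placeholders.
export_pipeline = [
    {"$match": {"status": "archived"}},          # select the data to export
    {
        "$out": {
            "s3": {
                "bucket": "my-analytics-bucket",
                "region": "us-east-1",
                "filename": "exports/orders-",   # object name prefix
                "format": {"name": "json"},      # output format
            }
        }
    },
]
# Run against a Federated Database Instance connection, e.g.:
# federated_db.orders.aggregate(export_pipeline)
```

Downstream services can then consume the exported objects directly from the bucket, without touching your cluster.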
Keeping Data in Sync Anywhere with Cluster-to-Cluster Sync
For over a decade, MongoDB users have been deploying clusters for some of their most important workloads. We work with customers running MongoDB in a variety of environments, but three main environments stand out:

- Globally distributed cloud clusters (Atlas and self-managed): Enterprises have been successfully running cloud-based applications, in multiple zones and regions, for 10-plus years. More recently, the deployment of globally distributed multi-cloud data clusters has provided tremendous value and flexibility for modern applications, and the last two years of the pandemic accelerated the proliferation of cloud data clusters to support new application services and workloads.
- On-premises clusters: Many leading companies and government institutions remain reliant on their on-premises systems for various reasons, including regulatory compliance, data governance, existing line-of-business application integrations, or legacy investments.
- Edge clusters: Organizations also distribute workloads to edge systems to bring enterprise applications closer to data sources, such as local edge servers ingesting sensor data from IoT devices. This proximity to data at its source can deliver substantial business benefits, including improved response times and faster insights.

Keeping hybrid data clusters in sync is challenging

Due to diverse data origins and the evolution of apps, maintaining data stores in hybrid environments (i.e., distributing data between different environments, or between multiple clusters in a single environment) can be challenging. As application owners innovate and expand to new data environments, a big part of their success will depend on effective data synchronization between their clusters. Cluster data synchronization requires:

- Support for globally distributed hybrid data clusters. All cluster data must be synchronized between different types of clusters.
- Continuous synchronization. A constant, nonstop stream of data that seamlessly flows across cluster deployments and is accessible by apps connecting to those different deployments.
- Resumability. The ability to pause and resume data synchronization from where you left off.

The need for a hybrid, inter-cluster data sync

By default, a MongoDB cluster allows you to natively distribute and synchronize data globally within a single cluster. We automate this intra-cluster movement of data using replica sets and sharded clusters. These two configurations let you replicate data across multiple zones, geographical regions, and even multi-cloud configurations. But there are occasions when users want to go beyond a single MongoDB cluster and synchronize data to a separate (inter-cluster) configuration for use cases such as:

- Migrating to MongoDB Atlas
- Creating separate development and production environments
- Supporting DevOps strategies (e.g., blue-green deployments)
- Deploying dedicated analytics environments
- Meeting locality requirements for auditing and compliance
- Maintaining preparedness for a stressed exit (e.g., reverse cloud migration)
- Moving data to the edge

Introducing Cluster-to-Cluster Sync

We designed Cluster-to-Cluster Sync to solve the challenges of inter-cluster data synchronization. It provides continuous, unidirectional data synchronization between two MongoDB clusters (source to destination) in the same or hybrid environments. With Cluster-to-Cluster Sync, you have full control of your synchronization process: you decide when to start, stop, pause, resume, or reverse the direction of synchronization, and you can monitor the progress of the synchronization in real time.

Availability

Cluster-to-Cluster Sync is now generally available as part of MongoDB 6.0. Currently, Cluster-to-Cluster Sync is compatible only with source and destination clusters running MongoDB 6.0 or later.

What's next?
To get started with Cluster-to-Cluster Sync, you need mongosync, a downloadable, self-hosted tool that enables data movement between two MongoDB clusters. Get started today:

- Download Cluster-to-Cluster Sync
- Read the Cluster-to-Cluster Sync docs
- Learn more about Cluster-to-Cluster Sync
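As a hedged sketch of what driving mongosync looks like: the binary connects to the two clusters, then exposes a local HTTP API (port 27182 by default) for controlling the sync. The connection strings below are placeholders, and the network call is left commented out since it requires a running mongosync process.

```python
import json
from urllib import request

# mongosync is started from the command line, roughly:
#   mongosync --cluster0 "mongodb://source-host:27017" \
#             --cluster1 "mongodb://destination-host:27017"

# Synchronization is then controlled through its HTTP API. The /start body
# declares which cluster is the source and which is the destination:
start_body = json.dumps({"source": "cluster0", "destination": "cluster1"})

# req = request.Request(
#     "http://localhost:27182/api/v1/start",
#     data=start_body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# request.urlopen(req)  # endpoints for progress, pause, resume, and reverse
#                       # follow the same request pattern
print(start_body)
```

Reversing the direction of synchronization (for example, for a stressed-exit scenario) is just another call against the same local API.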
MongoDB Announces the New Atlas CLI
We are pleased to announce the release of the new MongoDB Atlas Command-Line Interface (CLI). The MongoDB Atlas CLI is the fastest way to create and manage an Atlas database, automate ongoing operations, and scale your deployment across the full application development lifecycle.

The Atlas CLI gives you a streamlined experience for both onboarding and ongoing management of your Atlas database in the cloud: a unified, powerful control plane for managing and automating tasks around your cloud resources, all from a single interface. The Atlas CLI provides helpful guardrails, such as intelligent autocomplete, so you can easily view all available commands and syntax and reduce time spent looking up commands and fixing errors. And with the ability to automate repetitive management tasks like spinning up or pausing clusters, you can improve developer productivity and optimize your CI/CD pipelines with MongoDB.

For new MongoDB Atlas customers, the Atlas CLI gives you the power to get started quickly and streamline the most complex database management jobs. With just two terminal commands, you can start to programmatically manage MongoDB databases, automate user creation, control network access, and much more. To get started using the Atlas CLI, use the following two commands:

$ brew install mongodb-atlas-cli

This command installs the Atlas CLI via the Homebrew package manager.

$ atlas setup

This command launches an interactive wizard that lets you:

- Sign up and authenticate to Atlas
- Create a free forever MongoDB database hosted in the cloud
- Load sample data
- Create a database user and password
- Add your IP address to the access list
- Connect to the cluster using the MongoDB Shell, mongosh

In addition to installing via the Homebrew package manager, you can install the MongoDB Atlas CLI via apt-get, yum, or a direct download of installers and binaries.
There’s so much more you can do with the Atlas CLI, including creating serverless instances on Atlas, managing Atlas Search indexes, and setting up Atlas Online Archive. To see a list of all available commands, run $ atlas --help in the Atlas CLI. Take the Atlas CLI for a spin today! You can give us feedback in UserVoice. To learn more about what you can do with the Atlas CLI, check out the documentation page.
Embrace the Benefits of Serverless Development With MongoDB Atlas
Today’s applications are expected to just work, regardless of time of day, user traffic, or where in the world they are being accessed. To achieve this level of performance and scale, developers need to ensure they have the proper infrastructure resources in place to handle user demand, which often leads to time wasted on non-differentiating work. Organizations that want to stay competitive and rapidly innovate must look for solutions that simplify this process and speed up development. Enter serverless.

What’s the big deal with serverless?

Serverless technologies allow developers to build applications without thinking about resource provisioning and scaling. As a result, developers are increasingly adopting a serverless-first approach to application development as a means to move fast, optimize costs, and eliminate the operational overhead of deploying and managing infrastructure. With application demand and user expectations growing faster than ever, serverless is becoming an essential component of application modernization strategies for emerging startups and enterprises alike, with more and more organizations adopting function-as-a-service (FaaS) solutions, popular serverless frameworks, and now even serverless databases.

Atlas serverless instances now generally available

With MongoDB Atlas, our mission is and always has been to empower developers to move fast and simplify how they build with data for any application. Newer developers don’t have time to learn the intricacies of deploying and managing databases, nor should they have to. Recognizing this shift, we have been focused on building a developer data platform that minimizes this challenge. We started by launching services like Atlas Functions and Atlas Triggers, then moved to the data layer, first adding auto-scaling and then releasing Atlas serverless instances, our serverless database deployment option, in public preview in July 2021.
Today, we are excited to announce that serverless instances are now generally available (GA). With serverless instances, you can quickly deploy a database with minimal configuration: just choose your cloud provider and region, and get the full power of MongoDB with the benefits of the serverless model. Once you’ve deployed your database, the serverless instance takes care of scaling for you, with the ability to scale up or down from zero without any cold starts, and you are charged only for the operations you run.

What’s new in serverless instances

With the GA release, serverless instances offer additional features, such as private networking with AWS PrivateLink, enhanced monitoring and alerting capabilities, and extended backup retention with point-in-time recovery. Serverless instances are also now compatible with our other serverless cloud services, such as the Atlas Data API and Atlas Functions, making it even easier to build end-to-end serverless applications. We’ve also dropped our prices (by up to 60% in certain regions), with tiered pricing for reads that gives you automatic discounts on your daily usage without any up-front commitments or the need to talk to a sales rep. With this model, you can scale your usage without fear of surprises.

Develop modern serverless applications of any scale with Atlas

The MongoDB Atlas data platform lets you build modern applications of any scale. Unlike other serverless databases, Atlas provides the full power and flexibility of the document model, so you can structure data for a variety of use cases instead of being limited to simple key/value workloads. Additionally, our unified query API allows you to run MongoDB anywhere with a consistent experience, whether it’s on your laptop, a dedicated cluster, or a serverless instance, without ever changing your app code. Already using other serverless solutions in your application stack today?
Atlas connects seamlessly with other leading serverless tools, from FaaS offerings to app development platforms and frameworks, so you can continue working with the solutions you already know and love. And, most importantly, serverless instances are hosted on the same reliable Atlas foundation already trusted by organizations of all sizes, from disruptive startups to some of the world's largest enterprises.

Get started today

Serverless databases are incredibly flexible, and we’ve seen them perform well for lightweight or infrequent application workloads, such as development, testing, and QA environments, event-driven applications, and periodic cron jobs. Ready to give serverless instances a try? Deploy your first serverless database today to see just how easy it is to get a cloud data endpoint for your application. Create your first serverless database
Accelerate App Development by Integrating MongoDB Atlas with Vercel: Now Available on the Vercel Marketplace
We’re excited to announce that MongoDB Atlas is now available on the Vercel Integrations Marketplace. If you are already using Vercel to develop and ship applications, or considering it for an upcoming project, this integration lets you add a fully managed MongoDB Atlas database to your Vercel application in a matter of minutes.

Build new web experiences with ease

Vercel is known for making it easy for frontend developers to deploy Next.js applications instantly, with no configuration and seamless scale through built-in CI/CD, analytics, serverless functions, and content delivery at the edge. MongoDB Atlas complements Vercel with a fully managed multi-cloud database service built on an intuitive, flexible document data model that provides a frictionless getting-started experience. Atlas offers several database deployment types, ranging from a free shared cluster that is great for exploring MongoDB, to serverless instances that are ideal for app development and lightweight workloads, to dedicated clusters that offer advanced functionality and customizations to power the most mission-critical applications. When using Atlas with Vercel, developers can build new web experiences quickly and with ease: deploy on Vercel with zero configuration and instantly start building with documents that map directly to objects in your code.

Scale without limits with Atlas and Vercel

As your application grows, Atlas is built to grow with you, allowing you to modify data schemas if requirements change and to scale confidently with built-in defaults and best practices that keep your application performant and secure. Our developer data platform makes expanding to meet new workload requirements easy, with embedded capabilities for full-text search, real-time analytics, data visualization, and more, so you can get the most out of your data without the added complexity of additional tools. And if you’re planning to have users all over the world, that’s no problem.
Atlas and Vercel make delivering first-class experiences easier, regardless of where your users are located. Take advantage of Vercel’s edge network and the ability to distribute your data globally on Atlas with the click of a button, with access to nearly 100 regions and features for data partitioning, multi-region deployments, and multi-cloud deployments designed for resiliency and responsiveness.

Get started today

If you’re ready to start building your next application with MongoDB Atlas and Vercel, getting started is simple. Select MongoDB Atlas on the Vercel Integrations Marketplace and automatically create and link your Atlas database with your Vercel app project in just a few clicks. We’re excited to see what you build! Join our community forums to share your project, leave feedback, ask questions, and connect with other developers using MongoDB Atlas. Try the integration today
MongoDB Announces New Verified Solutions Program
MongoDB is pleased to announce the creation of our new Verified Solutions program, which empowers developers to use third-party tools alongside MongoDB with confidence.

Though MongoDB offers a comprehensive, end-to-end developer data platform, some developers have bespoke needs that require custom or third-party tools to complement MongoDB. Identifying third-party tools that will be reliable and performant, or dedicating the time and resources to build custom in-house solutions, can be a significant drain on developers’ time. That’s where the Verified Solutions program comes in.

MongoDB Verified Solutions are third-party tools that are vetted and approved by MongoDB, giving developers the confidence to use them in production environments. Verified Solutions will also be regularly recertified to help prevent issues or breaking changes down the line. And if developers do run into trouble, they can get expert, frontline support directly from MongoDB (though we do not guarantee bug fixes). All of this means developers can spend less time identifying, integrating, and resolving issues with third-party technologies and more time building best-in-class, differentiated applications.

The inaugural offering in the Verified Solutions program is Mongoose, a popular object-modeling tool for MongoDB and Node.js applications. Thousands of developers love Mongoose for its schema-based data modeling, built-in type casting, and help with validation, query building, business logic hooks, and more. As the Verified Solutions program grows, we look forward to adding more third-party tools to further empower developers building cutting-edge applications on MongoDB. To learn more, check out our Verified Solutions landing page.
MongoDB Atlas Data API Is Now Generally Available: Connectionless Data Access Over HTTPS
Today we’re excited to announce the general availability of the MongoDB Atlas Data API. The Data API is a serverless, secure API that brings the ease of HTTPS-based data access to the forefront of the Atlas developer experience. Traditionally, connecting to a database or integrating data into apps comes with a lot of operational burden, such as provisioning infrastructure or scaling. The Data API offers a new, fully managed way to build data-centric apps and services on top of Atlas. Now Atlas developers can simply think of their data in terms of an API.

Since we introduced the Data API in preview in November 2021, Atlas customers have been adopting it for a variety of use cases. For example, some customers are using it to connect to IoT environments where MongoDB drivers aren’t supported. Others are using the Data API as a way to quickly build a proof of concept. Many organizations are using the Data API to integrate Atlas data with other cloud services, such as AWS Lambda, Microsoft Power Apps, or Apigee, and with edge-based web services such as Vercel and Cloudflare. Penny Software, a cloud-based procure-to-pay startup in EMEA, is already using the Data API in multiple parts of their application. “The Atlas Data API has been instrumental in our efforts to thin out our backend application,” CTO Mohamad Ibrahim says. “It has helped the team reach a new level of productivity.”

New features and functionalities with GA

In addition to being production-ready, the Data API now supports new layers of configurable data permissioning and security, including:

- New authentication methods: We’ve added support for authentication methods such as JWT authentication and email/password.
- Role-based access control: Configure rules for user roles that control read and write access via the API.
- IP access list: Only allow client requests from the enabled entries in the IP access list.
- Custom endpoints: Define additional API routes, including the request method, the URL, and the logic, for additional configuration flexibility.

Get started with the Atlas Data API

If you’re ready to start building your next application with MongoDB Atlas and the Data API, getting started is easy: choose the cluster you’d like to connect to and generate an API key. That’s all it takes to set up and start sending requests to the API. Try the Data API Today
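To make the connectionless model concrete, here’s a sketch of a findOne call over HTTPS. The app ID, API key, cluster, and collection names are placeholders you’d replace with values from your own configuration; the request shape follows the Data API’s action endpoints.

```python
import json

# Sketch: a Data API findOne request. <app-id> and the api-key value are
# placeholders from your own Atlas App Services configuration.
url = "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/findOne"
headers = {
    "Content-Type": "application/json",
    "api-key": "<your-data-api-key>",
}
body = {
    "dataSource": "Cluster0",            # the Atlas cluster to query
    "database": "sample_mflix",
    "collection": "movies",
    "filter": {"title": "The Matrix"},
}
payload = json.dumps(body)

# Any HTTPS client can send it, e.g. with urllib:
# from urllib import request
# req = request.Request(url, data=payload.encode(), headers=headers)
# print(request.urlopen(req).read())
```

Because this is plain HTTPS, the same call works from environments where MongoDB drivers aren’t available, such as constrained IoT runtimes or edge functions.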
MongoDB Releases Queryable Encryption Preview
Today we are announcing the preview release of Queryable Encryption, which allows customers to encrypt sensitive data on the client side, store it as fully randomized encrypted data on the database server side, and run expressive queries on the encrypted data. With the introduction of Queryable Encryption, MongoDB is the only database provider that allows customers to run expressive queries, such as equality (available now in preview) and range, prefix, suffix, and substring (coming soon), on fully randomized encrypted data. This is a huge advantage for organizations that need to run expressive queries while also confidently securing their data.

Why is Queryable Encryption an important technology?

With the proliferation of different types of data being transmitted and stored in the cloud, protecting data is increasingly important for companies. Enterprises with high-sensitivity workloads require additional technical options to control and limit access to confidential and regulated data. For many enterprise and federal customers, compliance obligations dictate that sensitive workloads require separation of duties among personnel. For example, analysts at a stock brokerage firm may query to find clients and the number of shares, the broker may make stock transactions on behalf of the investor, and database administrators (DBAs) manage the data, while sensitive personally identifiable information (PII), such as a Social Security number (SSN), should be completely hidden.

Another important focus area for organizations is complying with data privacy and customer data protection mandates. This applies both to customers who use the data and to vendors who store it for them. Data privacy regulations can involve complying with laws within and outside your industry that help protect sensitive data, and making sure you follow all necessary measures to protect your customers’ most sensitive data is an ongoing process.
Data protection and privacy requirements typically apply to high-sensitivity information, such as personal health information (PHI) and PII.

Current state and challenges around data security

Although existing encryption solutions (in-transit and at-rest) cover many regulatory use cases, none of them protects sensitive data while it is in use. In-use encryption is often a requirement for high-sensitivity workloads, particularly for customers in financial services, healthcare, and critical infrastructure organizations. Current in-use encryption technologies present several challenges:

- In-use encryption is highly complex, requiring custom application-side code to encrypt, process, filter, and decrypt data before showing it to users. It also involves managing the encryption keys used to encrypt and decrypt that data.
- Developers need cryptography experience to design a secure encryption solution.
- Current solutions have limited or no querying capabilities, which makes using encrypted data in applications difficult.
- Some existing tools, such as homomorphic encryption or secure enclaves, have performance unsuited to scalable encrypted search, require proprietary hardware, or have uncertain security properties.

Introducing Queryable Encryption

Queryable Encryption removes this operational heavy lifting, resulting in faster app development without sacrificing data protection, compliance, or data privacy requirements. Here is a sample flow of operations in which an authenticated user queries fully randomized encrypted data. In this example, let's assume we are retrieving a user's Social Security number (SSN). When the application submits the query, MongoDB drivers first analyze it.
Recognizing that the query is against an encrypted field, the driver requests the encryption keys from the customer-provisioned key provider, such as AWS Key Management Service (AWS KMS), Google Cloud KMS, Azure Key Vault, or any KMIP-enabled provider, such as HashiCorp Vault. The driver then submits the query to the MongoDB server with the encrypted fields rendered as ciphertext. Queryable Encryption implements a fast, searchable scheme that allows the server to process queries on fully encrypted data without knowing anything about it; the data and the query itself remain encrypted at all times on the server. The MongoDB server returns the encrypted results of the query to the driver, where they are decrypted with the keys held by the driver and returned to the client as plaintext.

Advantages of Queryable Encryption

- Rich querying capabilities on encrypted data: MongoDB is the only database provider that allows customers to run rich query expressions, such as range, equality, prefix, suffix, and more, on encrypted data (equality search is included in the Preview release; the rest will follow in future releases). This is a huge advantage for customers, who can run expressive queries while confidently securing their data.
- Data encrypted throughout its lifecycle: Queryable Encryption adds another layer of security for your most sensitive data, keeping it secure in transit, at rest, in memory, in logs, and in backups. Additionally, Queryable Encryption stores data as fully randomized ciphertext on the server side.
- Strong technical controls for critical data privacy use cases: Strong technical controls allow customers to meet the strictest data privacy requirements for confidentiality and integrity using standards-based cryptography. Customers maintain control of encryption keys at all times, and data encryption and decryption happen only on the client side.
This guarantees that only authorized users with access to the client-side application and the encryption keys can see the plaintext data. These strong controls can help customers meet data privacy requirements mandated by HIPAA, GDPR, CCPA, and more.
- Faster application development: Developers don't need to be cryptography experts to protect data with the highest levels of confidentiality and integrity. Unlike an SDK, where the wrong design choice could lead to weakened security, Queryable Encryption is a comprehensive encryption solution with standards-based cryptography and strong key management built in. It is easy to set up and is supported on popular MongoDB drivers.
- Reduced institutional risk: Customers who are migrating to the cloud can confidently store their most sensitive data in MongoDB Atlas. Queryable Encryption allows customers to maintain control of their data while retaining rich, expressive querying capabilities on fully randomized encrypted data.

MongoDB enables strong security defaults to ensure that security configurations such as authentication, authorization, and in-transit and at-rest encryption are always on, making it easy for customers to focus on developing for their business needs. Queryable Encryption adds another layer of security, a strong form of technical control that lets customers protect data throughout its lifecycle while still running rich queries on the encrypted data.

Advanced Cryptography Research Group

Queryable Encryption was designed by MongoDB's Advanced Cryptography Research Group, headed by Seny Kamara and Tarik Moataz, pioneers in the field of encrypted search. The group conducts cutting-edge, peer-reviewed research in cryptography and works with MongoDB engineering teams to transfer and deploy the latest innovations in cryptography and privacy to the MongoDB data platform.
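The client-side setup described above is largely declarative: the application tells the driver which fields are encrypted, which query types they support, and which key provider holds the keys. The sketch below illustrates those configuration shapes in Python, assuming PyMongo-style option maps; the field name, key vault namespace, and placeholder credentials are illustrative only, and in a real deployment these maps would be passed to the driver's automatic-encryption options rather than used on their own.

```python
# Sketch of the client-side configuration for Queryable Encryption.
# The shapes below follow the documented driver option maps; values
# are illustrative placeholders, not a working deployment.

# Namespace of the key vault collection where the driver looks up
# data-encryption keys (a regular MongoDB collection).
key_vault_namespace = "encryption.__keyVault"

# Customer-provisioned key provider, e.g. AWS KMS. Credentials are
# placeholders -- never hard-code real keys in application code.
kms_providers = {
    "aws": {
        "accessKeyId": "<IAM access key id>",
        "secretAccessKey": "<IAM secret access key>",
    }
}

# Per-collection map of encrypted fields. Declaring a "queries" entry
# with queryType "equality" is what makes the SSN field queryable;
# fields without a "queries" entry are encrypted but not queryable.
encrypted_fields = {
    "fields": [
        {
            "path": "ssn",        # field holding the Social Security number
            "bsonType": "string",
            "keyId": None,        # None: let the driver create a key for us
            "queries": [{"queryType": "equality"}],
        }
    ]
}
```

Once these maps are handed to the driver when the client is constructed, inserts and equality queries on the `ssn` field are encrypted and decrypted transparently, following the flow described earlier.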
Resources

For more information on Queryable Encryption, refer to the following resources:

- MongoDB's Queryable Encryption
- MongoDB Documentation
- MongoDB Atlas Security Controls