GIANT Stories at MongoDB

New to MongoDB Atlas — Full CRUD Support in Data Explorer

As a fully managed database service, MongoDB Atlas makes life simpler for anyone interacting with MongoDB, whether you’re deploying a cluster on demand, restoring a snapshot, evaluating real-time performance metrics, or inspecting data.

Today, we’re taking it one step further by allowing developers to manipulate their data right from within the Atlas UI. The embedded Data Explorer, which has historically allowed you to run queries, view metadata regarding your deployments, and retrieve information such as index usage statistics, now supports full CRUD functionality.

To support these capabilities, new Project-level roles with different permission levels have been added.

You can assign users these new roles in the Users and Teams settings.

In addition, all Data Explorer operations are tracked and presented in the Atlas Activity Feed (found in the Alerts menu for each Project), allowing you to see who did what, and when.

When you click into the Data Explorer in Atlas, you should see new controls for interacting with your documents, collections, databases, and indexes. For example, modify existing documents using the intuitive visual editor, or insert new documents and clone or delete existing ones in just a few clicks. A comprehensive list of available Data Explorer operations can be found in the Atlas documentation.

The Data Explorer is currently available for M10 Atlas clusters and higher.

New to MongoDB Atlas on AWS — AWS Cloud Provider Snapshots, Free Tier Now Available in Singapore & Mumbai


AWS Cloud Provider Snapshots

MongoDB Atlas is an automated cloud database service designed for agile teams who’d rather spend their time building apps than managing databases, backups, and restores. Today, we’re happy to announce that Cloud Provider Snapshots are now available for MongoDB Atlas replica sets on AWS. As the name suggests, Cloud Provider Snapshots provide fully managed backup storage and recovery using the native snapshot capabilities of the underlying cloud service provider.

Choosing a backup method for a database cluster in MongoDB Atlas

When this feature is enabled, MongoDB Atlas will perform snapshots against the primary in the replica set; snapshots are stored in the same cloud region as the primary, granting you control over where all your data lives. Please visit our documentation for more information on snapshot behavior.

Cloud Provider Snapshots on AWS have built-in incremental backup functionality, meaning that a new snapshot only saves the data that has changed since the previous one. This minimizes the time it takes to create a snapshot and lowers costs by reducing the amount of duplicate data. For example, a cluster with 10 GB of data on disk and 3 snapshots may require less than 30 GB of total snapshot storage, depending on how much of the data changed between snapshots.

Cloud Provider Snapshots are available for M10 clusters or higher in all of the 15 AWS regions where you can deploy MongoDB Atlas clusters.

MongoDB Atlas allows only one backup method per project. Once you select a backup method – whether Continuous Backup or Cloud Provider Snapshots – for a cluster in a project, Atlas locks the backup service to that method for all subsequent clusters in the project. If some clusters require a different backup method, consider creating a separate Atlas project for them. To change the backup method for a project, you must disable backups for all clusters in the project, then re-enable backups using your preferred backup method. Note that Atlas deletes any stored snapshots when you disable backups for a cluster.


Free, $9, and $25 MongoDB Atlas clusters now available in Singapore & Mumbai

We’re committed to lowering the barrier to entry to MongoDB Atlas and allowing developers to build without worrying about database deployment or management. Last week, we released a 14% price reduction on all MongoDB Atlas clusters deployed in AWS Mumbai. And today, we’re excited to announce the availability of free and affordable database cluster sizes in South and Southeast Asia on AWS.

Free M0 Atlas clusters, which provide 512 MB of storage for experimentation and early development, can now be deployed in AWS Singapore and AWS Mumbai. If more space is required, M2 and M5 Atlas clusters, which provide 2 GB and 5 GB of storage, respectively, are now also available in these regions for just $9 and $25 per month.

MongoDB 4.0 Release Candidate 0 Has Landed

MongoDB enables you to meet the demands of modern apps through a technology foundation built on:

  1. The document data model – giving you the best way to work with data.
  2. A distributed systems design – allowing you to intelligently put data where you want it.
  3. A unified experience that gives you the freedom to run anywhere – future-proofing your work and eliminating vendor lock-in.

Building on the foundations above, MongoDB 4.0 is a significant milestone in the evolution of MongoDB, and we’ve just shipped the first Release Candidate (RC), ready for you to test.

Why is it so significant? Let’s take a quick tour of the key new features. And remember, you can learn about all of this and much more at MongoDB World'18 (June 26-27).

Multi-Document ACID Transactions

Previewed back in February, multi-document ACID transactions are part of the 4.0 RC. With snapshot isolation and all-or-nothing execution, transactions extend MongoDB’s ACID data integrity guarantees to multiple statements and multiple documents across one or many collections. They feel just like the transactions you are familiar with from relational databases, are easy to add to any application that needs them, and don’t change the way non-transactional operations are performed. With multi-document transactions, it’s easier than ever for developers to address a complete range of use cases with MongoDB; for many, simply knowing that transactions are available provides critical peace of mind that they can meet any requirement in the future. In MongoDB 4.0 transactions work within a replica set, and MongoDB 4.2 will support transactions across a sharded cluster*.

To give you a flavor of what multi-document transactions look like, here is a Python code snippet of the transactions API.

with client.start_session() as s:
    s.start_transaction()
    try:
        collection.insert_one(doc1, session=s)
        collection.insert_one(doc2, session=s)
    except Exception:
        # Abort so that neither insert is applied
        s.abort_transaction()
        raise
    # Commit both inserts as a single atomic unit
    s.commit_transaction()

And now, the transactions API for Java.

try (ClientSession clientSession = client.startSession()) {
    clientSession.startTransaction();
    try {
        collection.insertOne(clientSession, docOne);
        collection.insertOne(clientSession, docTwo);
        // Commit both inserts as a single atomic unit
        clientSession.commitTransaction();
    } catch (Exception e) {
        // Abort so that neither insert is applied
        clientSession.abortTransaction();
    }
}

Our path to transactions represents a multi-year engineering effort, beginning over 3 years ago with the integration of the WiredTiger storage engine. We’ve laid the groundwork in practically every part of the platform – from the storage layer itself to the replication consensus protocol, to the sharding architecture. We’ve built out fine-grained consistency and durability guarantees, introduced a global logical clock, refactored cluster metadata management, and more. And we’ve exposed all of these enhancements through APIs that are fully consumable by our drivers. We are feature complete in bringing multi-document transactions to replica sets, and 90% done on implementing the remaining features needed to deliver transactions across a sharded cluster.

Take a look at our multi-document ACID transactions web page where you can hear directly from the MongoDB engineers who have built transactions, review code snippets, and access key resources to get started.

Aggregation Pipeline Type Conversions

One of the major advantages of MongoDB over rigid tabular databases is its flexible data model. Data can be written to the database without first having to predefine its structure. This helps you to build apps faster and respond easily to rapidly evolving application changes. It is also essential in supporting initiatives such as single customer view or operational data lakes to support real-time analytics where data is ingested from multiple sources. Of course, with MongoDB’s schema validation, this flexibility is fully tunable, enabling you to enforce strict controls on data structure, type, and content when you need more control.
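For instance, here is a minimal sketch of such a validator in Python, assuming a hypothetical contacts collection (the field names are illustrative; $jsonSchema validation was introduced in MongoDB 3.6):

from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").test

# Reject documents that lack the required fields or carry the wrong types
db.create_collection("contacts", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["name", "email"],
        "properties": {
            "name": {"bsonType": "string"},
            "email": {"bsonType": "string", "pattern": "^.+@.+$"},
        },
    }
})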

So while MongoDB makes it easy to ingest data without complex cleansing of individual fields, it means working with this data can be more difficult when a consuming application expects uniform data types for specific fields across all documents. Handling different data types pushes more complexity to the application, and available ETL tools have provided only limited support for transformations. With MongoDB 4.0, you can maintain all of the advantages of a flexible data model, while prepping data within the database itself for downstream processes.

The new $convert operator enables the aggregation pipeline to transform mixed data types into standardized formats natively within the database. Ingested data can be cast into a standardized, cleansed format and exposed to multiple consuming applications – such as the MongoDB BI and Spark connectors for high-performance visualizations, advanced analytics and machine learning algorithms, or directly to a UI. Casting data into cleansed types makes it easier for your apps to process, sort, and compare data. For example, financial data inserted as a long can be converted into a decimal, enabling lossless and high precision processing. Similarly, dates inserted as strings can be transformed into the native date type.

When $convert is combined with over 100 different operators available as part of the MongoDB aggregation pipeline, you can reshape, transform, and cleanse your documents without having to incur the complexity, fragility, and latency of running data through external ETL processes.
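To give a flavor of what this looks like, here is a hedged sketch of $convert inside an aggregation pipeline, written with PyMongo; the orders collection and its price field are hypothetical:

from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").test

# Cast a field that arrived as a mix of strings, ints, and longs to Decimal128
pipeline = [
    {"$addFields": {
        "price": {"$convert": {
            "input": "$price",
            "to": "decimal",   # target type
            "onError": None,   # value to use if the cast fails
            "onNull": None     # value to use if the field is missing or null
        }}
    }}
]

for doc in db.orders.aggregate(pipeline):
    print(doc["_id"], doc["price"])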

Non-Blocking Secondary Reads

To ensure that reads never return data that is out of causal order with the primary replica, MongoDB blocks readers while oplog entries are applied in batches to the secondary. This can cause secondary reads to have variable latency, which becomes more pronounced when the cluster is serving write-intensive workloads. Why does MongoDB need to block secondary reads? When you apply a sequence of writes to a document, MongoDB is designed so that every node shows those writes in the same causal order. So if you change field "A" in a document and then change field "B", it is not possible to see that document with field "B" changed but field "A" unchanged. Eventually consistent systems suffer from this behavior, but MongoDB does not, and never has.

By taking advantage of storage engine timestamps and snapshots implemented for multi-document ACID transactions, secondary reads in MongoDB 4.0 become non-blocking. With non-blocking secondary reads, you now get predictable, low read latencies and increased throughput from the replica set, while maintaining a consistent view of data. Workloads that see the greatest benefits are those where data is batch loaded to the database, and those where distributed clients are accessing low latency local replicas that are geographically remote from the primary replica.

40% Faster Data Migrations

Very few of today’s workloads are static. For example, the launch of a new product or game, or seasonal reporting cycles can drive sudden spikes in load that can bring a database to its knees unless additional capacity can be quickly provisioned. If and when demand subsides, you should be able to scale your cluster back in, rightsizing for capacity and cost.

To respond to these fluctuations in demand, MongoDB enables you to elastically add and remove nodes from a sharded cluster in real time, automatically rebalancing the data across nodes in response. The sharded cluster balancer, responsible for evenly distributing data across the cluster, has been significantly improved in MongoDB 4.0. By concurrently fetching and applying documents, shards can complete chunk migrations up to 40% faster, allowing you to more quickly bring new nodes into service at just the moment they are needed, and scale back down when load returns to normal levels.

Extensions to Change Streams

Change streams, released with MongoDB 3.6, enable developers to build reactive, real-time, web, mobile, and IoT apps that can view, filter, and act on data changes as they occur in the database. Change streams enable seamless data movement across distributed database and application estates, making it simple to stream data changes and trigger actions wherever they are needed, using a fully reactive programming style.

With MongoDB 4.0, Change Streams can now be configured to track changes across an entire database or whole cluster. Additionally, change streams will now return a cluster time associated with an event, which can be used by the application to provide an associated wall clock time for the event.
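To give a flavor of the new scope options, here is a hedged sketch using PyMongo (a 4.0 deployment and a driver version that supports it are assumed); watch() on the client opens a cluster-wide change stream, while db.watch() scopes it to a single database:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# Cluster-wide change stream; use client.test.watch() to track a single database
with client.watch() as stream:
    for change in stream:
        # clusterTime is the cluster timestamp associated with the event
        print(change["operationType"], change.get("ns"), change.get("clusterTime"))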

Getting Started with MongoDB 4.0

Hopefully this gives you a taste of what’s coming in 4.0. There’s a stack of other stuff we haven’t covered today, but you can learn about it all in the resources below.

To get started with the RC now:

  1. Head over to the MongoDB download center to pick up the latest development build.
  2. Review the 4.0 release notes.
  3. Sign up for the forthcoming MongoDB University training on 4.0.

And you can meet our engineering team and other MongoDB users at MongoDB World'18 (June 26-27).

---

* Safe Harbor Statement

This blog post contains “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. Such forward-looking statements are subject to a number of risks, uncertainties, assumptions and other factors that could cause actual results and the timing of certain events to differ materially from future results expressed or implied by the forward-looking statements. Factors that could cause or contribute to such differences include, but are not limited to, those identified in our filings with the Securities and Exchange Commission. You should not rely upon forward-looking statements as predictions of future events. Furthermore, such forward-looking statements speak only as of the date of this presentation.

In particular, the development, release, and timing of any features or functionality described for MongoDB products remains at MongoDB’s sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality. Except as required by law, we undertake no obligation to update any forward-looking statements to reflect events or circumstances after the date of such statements.

Optimizing for Fast, Responsive Reads with Cross-Region Replication in MongoDB Atlas

MongoDB Atlas customers can enable cross-region replication for multi-region fault tolerance and fast, responsive reads.

  • Improved availability guarantees can be achieved by distributing replica set members across multiple regions. These secondaries will participate in the automated election and failover process should the primary (or the cloud region containing the primary) go offline.
  • Read-only replica set members allow customers to optimize for local reads (minimizing read latency) across different geographic regions using a single MongoDB deployment. These replica set members do not participate in the election and failover process and can never be elected primary.

In this post, we’ll dive a little deeper into optimizing for local reads using cross-region replication and walk you through the necessary configuration steps on an environment running on AWS.

Primer on read preference

Read preference determines how MongoDB clients route read operations to the members of a replica set. By default, an application directs its read operations to the replica set primary. By specifying the read preference, users can:

  • Enable local reads for geographically distributed users. Users from California, for example, can read data from a replica located locally for a more responsive experience
  • Allow read-only access to the database during failover scenarios

A read replica is simply an instance of the database that serves data replicated from the primary via the oplog; clients do not write to a read replica.

With MongoDB Atlas, we can easily distribute read replicas across multiple cloud regions, allowing us to expand our application's data beyond the region containing our replica set primary in just a few clicks.

To enable local reads and increase the read throughput to our application, we simply need to modify the read preference via the MongoDB drivers.
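For example, here is a minimal sketch with PyMongo; the connection string is a placeholder and the test.users namespace is hypothetical:

from pymongo import MongoClient

client = MongoClient(
    "<YOUR-ATLAS-CONNECTION-STRING>",
    readPreference="nearest",   # route reads to the lowest-latency member
)

# This read may now be served by a nearby read-only replica
doc = client.test.users.find_one({"country": "SG"})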

Enabling read replicas in MongoDB Atlas

We can enable read replicas for a new or existing MongoDB paid cluster in the Atlas UI. To begin, we can click on the cluster “configuration” button and then find the link named "Enable cross-region configuration options."

When we click this, we’ll be presented with an option to select the type of cross-replication we want. We'll choose deploy read-only replicas:

As you can see above, we have our preferred region (the region containing our replica set primary) set to AWS, us-east-1 (Virginia) with the default three nodes. We can add regions to our cluster configuration based on where we think other users of our application might be concentrated. In this case, we will add additional nodes in us-west-1 (Northern California) and eu-west-1 (Ireland), providing us with read replicas to serve local users.

Note that all writes will still go to the primary in our preferred region, and reads from the secondaries in the regions we’ve added will be eventually consistent.

We’ll click "Confirm and Deploy", which will deploy our multi-region cluster.

Our default connection string will now include these read replicas. We can go to the "Connect" button and find our full connection string to access our cluster:

When the deployment of the cluster completes, we will be ready to distribute our application's data reads across multiple regions using the MongoDB drivers. We can specifically configure readPreference within our connection string to send clients to the "closest replicas". For example, the Node native MongoDB Driver permits us to specify our preference:

readPreference: Specifies the replica set read preference for this connection.

The read preference accepts the following values:

  • primary: Read from the replica set primary (the default).
  • primaryPreferred: Read from the primary, falling back to a secondary if the primary is unavailable.
  • secondary: Read only from secondary members.
  • secondaryPreferred: Read from secondaries, falling back to the primary if no secondary is available.
  • nearest: Read from the member with the lowest network latency, regardless of its role.

For our app, if we want to ensure the read preference in our connection string is set to the nearest MongoDB replica, we would configure it as follows:

mongodb://admin:<PASSWORD>@cluster0-shard-00-00-bywqq.mongodb.net:27017,cluster0-shard-00-01-bywqq.mongodb.net:27017,cluster0-shard-00-02-bywqq.mongodb.net:27017,cluster0-shard-00-03-bywqq.mongodb.net:27017,cluster0-shard-00-04-bywqq.mongodb.net:27017/test?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin&readPreference=nearest

Security and Connectivity (on AWS)

MongoDB Atlas allows us to peer our application server's VPC directly to our MongoDB Atlas VPC within the same region. This permits us to reduce the network exposure to the internet and allows us to use native AWS Security Groups or CIDR blocks. You can review how to configure VPC Peering here.

A note on VPCs for cross-region nodes:

At this time, MongoDB Atlas does not support VPC peering across regions. If you want to grant clients in one cloud region read or write access to database instances in another cloud region, you would need to permit the clients’ public IP addresses to access your database deployment via IP whitelisting.

With cross-region replication and read-only replicas enabled, your application will now be capable of providing fast, responsive access to data from any number of regions.


Get started today with a free 512 MB database managed by MongoDB Atlas here.

New to MongoDB Atlas — Live Migrate Sharded Clusters and Deployments Running MongoDB 2.6

Live migration in MongoDB Atlas enables users to import data from MongoDB deployments running in other environments and cut over to a fully managed database service, giving you industry best practice security by default, advanced features to streamline operations and performance optimization, and the latest versions of MongoDB.

Today, we’re introducing two new enhancements to MongoDB Atlas live migration that make it easier than ever for users to take advantage of the official cloud database service for MongoDB with minimal impact to their applications.

  • Previously, live migration could only be performed on a replica set running MongoDB version 3.0 and above. MongoDB Atlas now supports live migrations of replica sets running MongoDB 2.6, making it easier for users running older versions to transition to a fully managed service and a more recent version of the database software.
  • Live migrations will now also support sharded clusters, meaning that some of the world’s largest MongoDB workloads can now be moved to MongoDB Atlas with less effort and minimal impact to production applications.


Live migrate from MongoDB 2.6 to 3.2+

Upgrading to a new database version may seem like routine work for some, but, as battle-hardened IT operators know, it can be complex and requires plenty of strategy and foresight.

Between all the applications and end users you have, the prospect of upgrading to a new release can be a major undertaking requiring significant planning. While some of our Enterprise and Community customers love to upgrade to the latest release as soon as possible to get new features and performance improvements, others take a more measured approach to upgrading.

To make upgrading easier, we are excited to announce that we have extended database version support for the live migration tool in MongoDB Atlas. MongoDB users running older versions of the database can now easily update to the latest versions of the database and migrate to the best way to run MongoDB in the cloud, all at the same time.

Using live migration, you can migrate from any MongoDB 2.6 replica set to a MongoDB 3.2+ cluster on Atlas. This requires no backend configuration, no downtime, and no upgrade scripting. Once our migration service is able to communicate with your database, it will do all the heavy lifting.

The migration service works by:

  • Performing a sync between your source database and a target database hosted in MongoDB Atlas
  • Syncing live data between your source database and the target database by tailing the oplog
  • Notifying you when it’s time to cut over to the MongoDB Atlas cluster

Given that you’re upgrading a critical part of your application, you do need to be wary of how your application’s compatibility with the database might change. We recommend including the following stages in your upgrade plan:

  • Upgrade your application to make use of the latest MongoDB drivers, and make any necessary code changes
  • Create a staging environment on MongoDB Atlas
  • Use the live migration tool to import your data from your existing MongoDB replica
  • Deploy a staging version of your updated application and connect it to your newly created MongoDB Atlas staging environment
  • Perform thorough functional and performance tests to ensure behavior is as expected
  • Re-use the live migration tool to import your production data when ready, and then perform the hard cutover in databases and application versions

Compatibility between source and destination cluster versions.


Live migrate sharded clusters

Until today, migrating a sharded MongoDB deployment with minimal downtime has been difficult. The live migration tool now makes this possible for customers looking to move their data layer into MongoDB Atlas.

When performing a live migration on a sharded cluster, we recommend that, in addition to following the process listed above, you also consider the following:

  • Our live migration service will need access to all your shards and config servers, in addition to your mongos servers
  • You can only migrate across like database versions e.g. 3.2 to 3.2, 3.4 to 3.4, etc.
  • You must migrate from and to the same number of shards
  • For full details on sharded live migrations, click here

Ready to migrate to MongoDB Atlas? Get started here.

New to MongoDB Atlas — Cloud Provider Snapshots on Azure, Expanded API for Snapshots and Restore Jobs

Leo Zheng
February 14, 2018
Release Notes, Cloud

One of the core components of MongoDB Atlas, the cloud database service for MongoDB, is the fully managed disaster recovery functionality. With continuous backups, you can take consistent, cluster-wide snapshots of sharded deployments and trigger point-in-time restores to satisfy demanding recovery point objectives (RPOs) from the business. Continuous backups also allow you to query backup snapshots to restore granular data in a fraction of the time it would take to restore an entire snapshot.

Today we’re making it even easier to manage your backups with an expanded Atlas API. Programmatically get metadata about your snapshots, delete them, or change their expiration. Trigger restore jobs and retrieve them. The MongoDB Atlas API allows you to incorporate the rich functionality of Atlas fully managed backups into workflows optimized for how you manage your IT resources.
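For example, here is a hedged sketch of retrieving snapshot metadata with Python’s requests library; the project ID, cluster name, and credentials are placeholders, and the exact endpoint path should be confirmed against the Atlas API documentation:

import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

resp = requests.get(
    BASE + "/groups/<PROJECT-ID>/clusters/<CLUSTER-NAME>/backup/snapshots",
    auth=HTTPDigestAuth("<USERNAME>", "<API-KEY>"),   # the Atlas API uses digest auth
)
resp.raise_for_status()

for snapshot in resp.json().get("results", []):
    print(snapshot.get("id"), snapshot.get("createdAt"), snapshot.get("expiresAt"))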

Visit our documentation for more information.

Cloud Provider Snapshots for Azure

We are also introducing a new type of managed backup service for MongoDB Atlas, using the native snapshot capabilities of your cloud provider. With cloud provider snapshots, your backups will be stored in the same cloud region as your managed databases, granting you better governance over where all of your data lives.

For deployments using cross-region replication, your backups will be stored in your preferred region.

Compared to continuous backups, cloud provider snapshots allow for fast restores of snapshot images. Pricing, which varies slightly from region to region, is also lower.

Cloud provider snapshots are available today for replica sets on Microsoft Azure. Support for Amazon Web Services and Google Cloud Platform will be rolled out later this year.

If you’re considering switching backup methods (from continuous backup to cloud provider snapshots), consider creating a separate project in MongoDB Atlas. For each Atlas project, the first cluster you enable backups for will dictate the backup method for all subsequent clusters in the project. To change the backup method within the same project, disable backups for all clusters in the project, then re-enable backups using your preferred backup methodology. MongoDB Atlas automatically deletes any stored snapshots when you disable backups for a cluster.


Not yet a MongoDB Atlas user? Create an account and get a free 512 MB database.

New to MongoDB Atlas: Availability across all Google Cloud Platform regions

Leo Zheng
January 11, 2018
Release Notes, Cloud

A wide variety of companies around the world, from innovators in the social media space to industry leaders in energy, are running MongoDB on Google Cloud Platform (GCP). Increasingly, these organizations are consuming MongoDB as a fully managed service with MongoDB Atlas, which boosts the productivity of teams that touch the database by reducing the operational overhead of setup, ongoing management, and performance optimization.

When MongoDB Atlas became available on GCP last June, users were able to run it in 4 regions: us-east1 (South Carolina), us-central1 (Iowa), asia-east1 (Taiwan), europe-west1 (Belgium). This week we’re excited to launch the service across all Google Cloud Platform regions, allowing you to easily deploy and run MongoDB near you.

Most GCP regions are made up of 3 isolated locations called zones where resources can be provisioned. MongoDB Atlas automatically distributes a 3-node replica set across the zones in a region, ensuring that the automated election and failover process can complete successfully if the zone containing the primary node becomes unavailable.

For Atlas deployments in GCP’s Singapore region, which contains 2 zones instead of 3, it’s recommended that users enable Atlas’s cross-region replication to obtain a similar level of redundancy.

Atlas is available across all GCP regions now. We’re excited to see what you build with MongoDB and Google services!

Not an Atlas user yet? Get started here.

New to MongoDB Atlas: Pause/Resume Clusters, M200 Instance Size on AWS

Leo Zheng
December 22, 2017
Release Notes, Cloud

MongoDB Atlas, the managed MongoDB service, now allows you to pause and restart your database clusters. This makes it easy and affordable for you to integrate MongoDB into DevOps workflows where always-on access to the underlying data is not required — e.g. development or testing.

When combined with Atlas’s fully managed backup service, this new functionality allows you to seamlessly create multiple environments for development and testing while keeping infrastructure and operational costs to a minimum.

For example, you could restore a subset of your production data (using queryable snapshots) to a smaller database to try out new features introduced in MongoDB 3.6. You can even restore to different Atlas Projects, regions, or clouds to give different members of your organization local access. And now with the pause cluster feature, your development and testing teams can easily stop any databases when they’re not being used.

Pausing and resuming a cluster requires just a few clicks in the Atlas UI or a single call with the Atlas API. When the cluster is paused, you are charged for provisioned storage and any associated backups, but not for compute instance hours associated with your Atlas cluster. Clusters can be paused for up to 7 days. If you do not resume a paused cluster within the 7 day window, Atlas will automatically resume the cluster.
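As a hedged sketch, pausing a cluster through the API might look like the following in Python; the project ID, cluster name, and credentials are placeholders, and the request body should be confirmed against the Modify Cluster endpoint in the Atlas API documentation:

import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"

resp = requests.patch(
    BASE + "/groups/<PROJECT-ID>/clusters/<CLUSTER-NAME>",
    auth=HTTPDigestAuth("<USERNAME>", "<API-KEY>"),
    json={"paused": True},   # set to False to resume the cluster
)
resp.raise_for_status()
print(resp.json().get("stateName"))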

The pause/resume feature is now available for all dedicated instance sizes (M10 and above) in every supported region on AWS, Microsoft Azure, and Google Cloud Platform.

Larger Max Instance Size on AWS (M200)

MongoDB Atlas now supports a larger instance size on Amazon Web Services. The new M200 clusters are designed for the most demanding production workloads and peak hours of activity. Each instance features 64 vCPUs, 256 GB of RAM, and 1500 GB of storage included, with 25 Gigabit network connectivity.

M200 instances are available in all 14 AWS regions supported by MongoDB Atlas.

Not an Atlas user yet? Get started with a 512MB database for free.

New to MongoDB Atlas: Performance Advisor, Auto-Expand Storage Capacity, Teams

MongoDB Atlas includes a set of monitoring capabilities that give your teams complete visibility into the performance of your databases, allowing you to anticipate issues and proactively take the necessary steps to ensure an optimal experience for your end customers. Important historical metrics are automatically highlighted in optimized dashboards. It’s easy to create and customize alerts that ping the endpoints you want when key metrics go out of range. You can also see what’s happening in your cluster as it happens with the real-time performance panel, which displays memory usage, network I/O, operations in flight, the hottest collections, and the slowest operations. This panel even allows you to kill slow-running operations with just a few clicks.

Real-time performance panel

Automated index suggestions with the new Performance Advisor

But what if instead of killing off those operations, you wanted a quick and easy way to see how to improve their runtime? That’s now easy with the new Performance Advisor, available for all dedicated MongoDB Atlas deployments. The Performance Advisor shows the different collections in your database that are experiencing suboptimal performance. Click on a specific collection and it will display existing indexes, examples of slow-running queries and relevant metrics, and most importantly, automatically generated index suggestions to help improve their performance.

New Performance Advisor, available for all dedicated MongoDB Atlas deployments

This new feature runs in the background with no impact to your existing deployments and ensures that you’re getting the most performance out of MongoDB with the resources you’ve provisioned.

Automatically expanding storage capacity

When you do need additional resources, MongoDB Atlas now makes that process easier to manage with automatic scaling for storage capacity. Enabled by default for all dedicated clusters (M10 instance size and above), auto-scaling for storage detects when your disks hit 90% utilization and provisions additional storage such that your cluster reaches a disk utilization of 70% on AWS & GCP, or a maximum of 70% utilization on Azure. This automated process occurs without impact to your database or application availability.

Simplified user management with Teams

MongoDB Atlas makes it easy to manage your database footprint with a simple hierarchy optimized for organizations made up of multiple business units. Projects contain MongoDB clusters; clusters in a project do not necessarily have to be in the same region. Organizations are made up of different projects that share the same billing settings. And today, we’re introducing teams, which will help simplify database user management. All users in a team will share the same access to a project. Teams can have access to multiple projects and users can belong to multiple teams.

Changelog

  • New disk size options for customers running on Microsoft Azure. 32GB, 64GB, 256GB, 512GB, 2TB, and 4TB disk sizes are now available.
  • Free tier is now also available in AWS Frankfurt (EU-Central-1)

Have feedback about a new feature or MongoDB Atlas? As always, we’d love to hear it at mongodb-atlas@mongodb.com.

Not an Atlas user yet? Get started with a 512MB database for free.

New to MongoDB Atlas: Cross-Region Replication, New Instance Sizes

Our automated database service, MongoDB Atlas, now serves thousands of customers across a wide range of industries, providing high availability, consistent performance, and simplified operations.