Atlas supports deploying clusters onto Google Cloud Platform (GCP). This page provides reference material related to Atlas cluster deployments on Google Cloud.
Depending on your cluster tier, Atlas supports the following Google Cloud regions. Some regions don't support Free or Shared-Tier clusters; a check mark indicates support for Free or Shared-Tier clusters. The Atlas API uses the corresponding Atlas Region.
Each Atlas cluster tier comes with a default set of resources. Atlas provides the following resource configuration options.
Storage size reflects the size of the server root volume. Atlas clusters deployed onto Google Cloud use SSD persistent storage.
The actual amount of RAM available to each cluster tier might be slightly less than the stated amount, due to memory that the kernel reserves.
The following cluster tiers are available:
Can use this tier for a multi-cloud cluster.
Unavailable in the following regions:
Atlas limits R-class instances to the following regions:
For purposes of management with the Atlas Administration API, cluster tier names that are prepended with an R instead of an M (R40, for example) run a low-CPU version of the cluster. When creating or modifying a cluster with the API, be sure to specify your desired cluster class by name.
Low-CPU cluster tiers (R40, R50, R60, and so on) are available in multi-cloud cluster configurations as long as the cluster tier is available in all of the regions that the cluster uses.
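As an illustrative sketch, the request body for creating a low-CPU cluster through the Atlas Administration API might be assembled as follows. The field names (`providerSettings`, `instanceSizeName`, `regionName`) follow the shape of the legacy v1.0 clusters endpoint; the cluster name and region here are placeholders, and this is not a definitive template for the current API version.

```python
import json

def build_cluster_payload(name, instance_size="R40", region="CENTRAL_US"):
    """Build an example request body for creating a cluster via the
    Atlas Administration API. The low-CPU class is selected by naming
    an R-prefixed tier (R40) instead of the M-prefixed equivalent (M40).
    Field names assume the v1.0 providerSettings shape."""
    return {
        "name": name,
        "providerSettings": {
            "providerName": "GCP",
            "instanceSizeName": instance_size,  # R40, not M40, for low-CPU
            "regionName": region,
        },
    }

payload = build_cluster_payload("low-cpu-cluster")
print(json.dumps(payload, indent=2))
```

The payload would then be sent with an authenticated POST to the project's clusters endpoint; to switch back to a standard tier, change only the `instanceSizeName` value.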
Workloads typically require less than 2 TB of storage.
Atlas configures the following resources automatically and does not allow user modification:
- Storage Speed
- Encrypted Storage Volumes
Storage speed is the number of input/output operations per second (IOPS) that the system performs. This value is fixed at:
- 30 IOPS per GB for reads
- 30 IOPS per GB for writes, for a total of 60 IOPS per GB
For example, an
M30 cluster with 40 GB of default storage has a
maximum read speed of 1,200 IOPS and a maximum write speed of 1,200 IOPS.
If you increase the storage size to 100 GB per cluster, this increases
the maximum read speed to 3,000 IOPS and the maximum write speed to
3,000 IOPS.
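The arithmetic above can be sketched as a small helper that applies the fixed 30 IOPS-per-GB read and write rates to a given storage size (the function name is just for illustration):

```python
READ_IOPS_PER_GB = 30
WRITE_IOPS_PER_GB = 30

def max_iops(storage_gb):
    """Return (max_read_iops, max_write_iops) for a storage size in GB,
    using the fixed per-GB rates described above."""
    return storage_gb * READ_IOPS_PER_GB, storage_gb * WRITE_IOPS_PER_GB

print(max_iops(40))   # M30 default storage: (1200, 1200)
print(max_iops(100))  # 100 GB: (3000, 3000)
```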
Google Cloud storage volumes are always encrypted.
Each Google Cloud region includes a set number of independent zones. Each zone has power, cooling, networking, and control planes that are isolated from other zones.
For regions that have multiple zones, such as 2Z (for two zones) or 3Z (for three zones), Atlas deploys clusters across these zones.
The Atlas Add New Cluster form marks regions that support 3Z clusters as Recommended, as they provide higher availability.
For general information on Google Cloud regions and zones, see the Google documentation on regions and zones.
The number of zones in a region has no effect on the number of MongoDB nodes Atlas can deploy. MongoDB Atlas clusters are always made of replica sets with a minimum of three MongoDB nodes.
If the selected Google Cloud region has at least three zones, Atlas clusters are split across three zones. For example, a three-node replica set cluster would have one node deployed onto each zone.
3Z clusters have higher availability compared to 2Z clusters. However, not all regions support 3Z clusters.
For detailed documentation on Google storage options, see Storage Options.
Along with global region support, the following product integrations enable applications running on Google Cloud, such as Google Compute Engine, Google Cloud Functions, Google Cloud Run, and Google App Engine, to use Atlas instances easily and securely.
- Google Virtual Private Cloud (VPC): Set up network peering connections with GCP
- Google Identity: Sign up and log into Atlas with Google
- Google Cloud Key Management Service (KMS): Encrypt Atlas data at rest with keys managed in GCP
- GCP Marketplace: Pay for Atlas usage via GCP
For more information on how to use Google Cloud with Atlas most effectively, review the following best practices, guides, and case studies:
- Case Study: Why build apps on a cloud-native database like MongoDB Atlas?
- Google Datastream: Streamline your real-time data pipeline with Datastream and MongoDB