MongoDB Atlas With Terraform - Cluster and Backup Policies

Samuel Molling • 22 min read • Published Sep 11, 2024
Tags: Terraform, Atlas
In this tutorial, I will show you how to create a MongoDB cluster in Atlas using Terraform. We saw in a previous article how to create an API key to start using Terraform and create our first project module. Now, we will go ahead and create our first cluster. If you don't have an API key and a project, I recommend you look at the previous article.
This article is for anyone who intends to use or already uses infrastructure as code (IaC) on the MongoDB Atlas platform or wants to learn more about it.
Everything we do here is contained in the provider/resource documentation: mongodbatlas_advanced_cluster | Resources | mongodb/mongodbatlas | Terraform
Note: We will not use a backend file. However, for production implementations, it is extremely important and safer to store the state file in a remote location such as S3, GCS, or Azure Blob Storage.

Creating a cluster

At this point, we will create our first replica set cluster using Terraform in MongoDB Atlas. As discussed in the previous article, Terraform is a powerful infrastructure-as-code tool that allows you to manage and provision IT resources in an efficient and predictable way. By using it in conjunction with MongoDB Atlas, you can automate the creation and management of database resources in the cloud, ensuring a consistent and reliable infrastructure.
Before we begin, make sure that all the prerequisites mentioned in the previous article are properly configured: Install Terraform, create an API key in MongoDB Atlas, and set up a project in Atlas. These steps are essential to ensure the success of creating your replica set cluster.

Terraform provider configuration for MongoDB Atlas

The first step is to configure the Terraform provider for MongoDB Atlas. This will allow Terraform to communicate with the MongoDB Atlas API and manage resources within your account. Add the following block of code to your provider.tf file: 
provider "mongodbatlas" {}
In the previous article, we configured the Terraform provider by entering our public and private keys directly. Now, to adopt more professional practices, we have chosen to use environment variables for authentication. The MongoDB Atlas provider, like many others, supports several authentication methods, and the safest and most recommended option is environment variables. This means only defining the provider in our Terraform code and exporting the relevant environment variables wherever Terraform will be executed: in the terminal, as a secret in Kubernetes, or as a secret in GitHub Actions, among other possible contexts. There are other forms of authentication, such as using the MongoDB CLI, AWS Secrets Manager, variables in Terraform, or even specifying the keys directly in the code. However, to ensure security and avoid exposing our keys in accessible locations, we opt for the safer approach.
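For reference, here is a minimal sketch of the variable-based alternative mentioned above, which we deliberately avoid in this tutorial because the keys can end up in state files or version control. The variables atlas_public_key and atlas_private_key are hypothetical and would have to be declared (and protected) by you:

# Not used in this tutorial — shown only for comparison.
provider "mongodbatlas" {
  public_key  = var.atlas_public_key  # hypothetical variable
  private_key = var.atlas_private_key # hypothetical variable
}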

Creating the Terraform version file

Inside the versions.tf file, you will start by specifying the version of Terraform that your project requires. This is important to ensure that all users and CI/CD environments use the same version of Terraform, avoiding possible incompatibilities or execution errors. In addition to defining the Terraform version, it is equally important to specify the versions of the providers used in your project. This ensures that resources are managed consistently. For example, to set the MongoDB Atlas provider version, you would add a required_providers block inside the Terraform block, as shown below:
terraform {
  required_version = ">= 0.12"

  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "1.14.0"
    }
  }
}
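A note on the version pin: instead of an exact version, Terraform also accepts a pessimistic constraint, which allows newer compatible releases while protecting against breaking major versions. A sketch of that variant:

terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      # "~> 1.14" allows any 1.x release at or above 1.14, but not 2.0.
      version = "~> 1.14"
    }
  }
}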

Defining the cluster resource

After configuring the version file and establishing the Terraform and provider versions, the next step is to define the cluster resource in MongoDB Atlas. This is done by creating a .tf file, for example, main.tf, where you will specify the properties of the desired cluster. Since we are building a reusable module, we will use variables and default values so that other calls can create clusters with different architectures or sizes, without having to write a new module.
Let's walk through some attributes and parameters to make this clear.
# ------------------------------------------------------------------------------
# MONGODB CLUSTER
# ------------------------------------------------------------------------------
resource "mongodbatlas_advanced_cluster" "default" {
  project_id             = data.mongodbatlas_project.default.id
  name                   = var.name
  cluster_type           = var.cluster_type
  backup_enabled         = var.backup_enabled
  pit_enabled            = var.pit_enabled
  mongo_db_major_version = var.mongo_db_major_version
  disk_size_gb           = var.disk_size_gb
In this first block, we specify the cluster's name through the name parameter and its type (REPLICASET, SHARDED, or GEOSHARDED), and whether backup and point-in-time recovery are enabled, in addition to the database version and the amount of storage for the cluster.
  advanced_configuration {
    fail_index_key_too_long              = var.fail_index_key_too_long
    javascript_enabled                   = var.javascript_enabled
    minimum_enabled_tls_protocol         = var.minimum_enabled_tls_protocol
    no_table_scan                        = var.no_table_scan
    oplog_size_mb                        = var.oplog_size_mb
    default_read_concern                 = var.default_read_concern
    default_write_concern                = var.default_write_concern
    oplog_min_retention_hours            = var.oplog_min_retention_hours
    transaction_lifetime_limit_seconds   = var.transaction_lifetime_limit_seconds
    sample_size_bi_connector             = var.sample_size_bi_connector
    sample_refresh_interval_bi_connector = var.sample_refresh_interval_bi_connector
  }
Here, we specify some advanced settings. Many of these values will not be set in the .tfvars file, as they have default values in the variables.tf file.
Parameters include the default read/write concern, the oplog size in MB, the minimum TLS protocol, whether server-side JavaScript is enabled, and the transaction lifetime limit in seconds. When no_table_scan is true, the cluster disables the execution of any query that requires a collection scan to return results. There are more parameters you can look up in the documentation if you have questions.
  replication_specs {
    num_shards = var.cluster_type == "REPLICASET" ? null : var.num_shards

    dynamic "region_configs" {
      for_each = var.region_configs

      content {
        provider_name = region_configs.value.provider_name
        priority      = region_configs.value.priority
        region_name   = region_configs.value.region_name

        electable_specs {
          instance_size   = region_configs.value.electable_specs.instance_size
          node_count      = region_configs.value.electable_specs.node_count
          disk_iops       = region_configs.value.electable_specs.instance_size == "M10" || region_configs.value.electable_specs.instance_size == "M20" ? null : region_configs.value.electable_specs.disk_iops
          ebs_volume_type = region_configs.value.electable_specs.ebs_volume_type
        }

        auto_scaling {
          disk_gb_enabled            = region_configs.value.auto_scaling.disk_gb_enabled
          compute_enabled            = region_configs.value.auto_scaling.compute_enabled
          compute_scale_down_enabled = region_configs.value.auto_scaling.compute_scale_down_enabled
          compute_min_instance_size  = region_configs.value.auto_scaling.compute_min_instance_size
          compute_max_instance_size  = region_configs.value.auto_scaling.compute_max_instance_size
        }

        analytics_specs {
          instance_size   = try(region_configs.value.analytics_specs.instance_size, "M10")
          node_count      = try(region_configs.value.analytics_specs.node_count, 0)
          disk_iops       = try(region_configs.value.analytics_specs.disk_iops, null)
          ebs_volume_type = try(region_configs.value.analytics_specs.ebs_volume_type, "STANDARD")
        }

        analytics_auto_scaling {
          disk_gb_enabled            = try(region_configs.value.analytics_auto_scaling.disk_gb_enabled, null)
          compute_enabled            = try(region_configs.value.analytics_auto_scaling.compute_enabled, null)
          compute_scale_down_enabled = try(region_configs.value.analytics_auto_scaling.compute_scale_down_enabled, null)
          compute_min_instance_size  = try(region_configs.value.analytics_auto_scaling.compute_min_instance_size, null)
          compute_max_instance_size  = try(region_configs.value.analytics_auto_scaling.compute_max_instance_size, null)
        }

        read_only_specs {
          instance_size   = try(region_configs.value.read_only_specs.instance_size, "M10")
          node_count      = try(region_configs.value.read_only_specs.node_count, 0)
          disk_iops       = try(region_configs.value.read_only_specs.disk_iops, null)
          ebs_volume_type = try(region_configs.value.read_only_specs.ebs_volume_type, "STANDARD")
        }
      }
    }
  }
Here, we set the number of shards if our cluster is not a REPLICASET. For each entry in region_configs, we also specify the cloud provider, region, failover priority, and the electable, analytics, and read-only node configurations, along with their auto-scaling settings.
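To make the dynamic block concrete, here is a hypothetical value for var.region_configs describing a three-node replica set spread across two AWS regions. Remember that the electable node counts across all regions must total 3, 5, or 7 and that priorities descend from 7; the field names match what the module above reads (electable_specs and auto_scaling are accessed directly, so they must be present in each entry):

region_configs = [
  {
    provider_name = "AWS"
    region_name   = "US_EAST_1"
    priority      = 7
    electable_specs = {
      instance_size   = "M10"
      node_count      = 2
      disk_iops       = null
      ebs_volume_type = "STANDARD"
    }
    auto_scaling = {
      disk_gb_enabled            = true
      compute_enabled            = false
      compute_scale_down_enabled = false
      compute_min_instance_size  = null
      compute_max_instance_size  = null
    }
  },
  {
    provider_name = "AWS"
    region_name   = "US_WEST_2"
    priority      = 6
    electable_specs = {
      instance_size   = "M10"
      node_count      = 1
      disk_iops       = null
      ebs_volume_type = "STANDARD"
    }
    auto_scaling = {
      disk_gb_enabled            = true
      compute_enabled            = false
      compute_scale_down_enabled = false
      compute_min_instance_size  = null
      compute_max_instance_size  = null
    }
  },
]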
  dynamic "tags" {
    for_each = local.tags
    content {
      key   = tags.key
      value = tags.value
    }
  }

  bi_connector_config {
    enabled         = var.bi_connector_enabled
    read_preference = var.bi_connector_read_preference
  }

  lifecycle {
    ignore_changes = [
      disk_size_gb,
    ]
  }
}
Next, we create a dynamic block that loops over each tag we include. We also configure the BI Connector, if desired, and the lifecycle block. Here, we only ignore disk_size_gb as an example, but it is worth reading the documentation's warnings about this block, such as also including instance_size: auto-scaling can change it, and you don't want to accidentally downsize an instance during peak times.
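A hedged sketch of the broader lifecycle block those warnings point to. The nested attribute path below is illustrative and depends on your structure, so verify the exact path in the provider documentation before relying on it:

  lifecycle {
    ignore_changes = [
      disk_size_gb,
      # With compute auto-scaling enabled, also consider ignoring the
      # instance size so Terraform does not undo an automatic scale-up
      # (illustrative path; confirm it in the provider docs):
      replication_specs[0].region_configs[0].electable_specs[0].instance_size,
    ]
  }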
# ------------------------------------------------------------------------------
# MONGODB BACKUP SCHEDULE
# ------------------------------------------------------------------------------
resource "mongodbatlas_cloud_backup_schedule" "default" {
  project_id               = data.mongodbatlas_project.default.id
  cluster_name             = mongodbatlas_advanced_cluster.default.name
  update_snapshots         = var.update_snapshots
  reference_hour_of_day    = var.reference_hour_of_day
  reference_minute_of_hour = var.reference_minute_of_hour
  restore_window_days      = var.restore_window_days

  policy_item_hourly {
    frequency_interval = var.policy_item_hourly_frequency_interval
    retention_unit     = var.policy_item_hourly_retention_unit
    retention_value    = var.policy_item_hourly_retention_value
  }

  policy_item_daily {
    frequency_interval = var.policy_item_daily_frequency_interval
    retention_unit     = var.policy_item_daily_retention_unit
    retention_value    = var.policy_item_daily_retention_value
  }

  policy_item_weekly {
    frequency_interval = var.policy_item_weekly_frequency_interval
    retention_unit     = var.policy_item_weekly_retention_unit
    retention_value    = var.policy_item_weekly_retention_value
  }

  policy_item_monthly {
    frequency_interval = var.policy_item_monthly_frequency_interval
    retention_unit     = var.policy_item_monthly_retention_unit
    retention_value    = var.policy_item_monthly_retention_value
  }
}
Finally, we create the backup block, which contains the policies and settings regarding the backup of our cluster.
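With the default values defined below in variables.tf, this schedule takes a snapshot every 12 hours (retained for three days), a daily snapshot (retained for seven days), a weekly snapshot (retained for four weeks), and a monthly snapshot (retained for 12 months), with a three-day point-in-time restore window. A hypothetical terraform.tfvars override for a more aggressive hourly policy might look like this:

# Hypothetical override: snapshot every 6 hours, retained for 2 days
policy_item_hourly_frequency_interval = 6
policy_item_hourly_retention_unit     = "days"
policy_item_hourly_retention_value    = 2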
This module, while detailed, encapsulates the full functionality offered by the mongodbatlas_advanced_cluster and mongodbatlas_cloud_backup_schedule resources, providing a comprehensive approach to creating and managing clusters in MongoDB Atlas. It supports the configuration of replica set, sharded, and geosharded clusters, meeting a variety of scalability and geographic distribution needs.
One of the strengths of this module is its flexibility in configuring backup policies, allowing fine adjustments that precisely align with the requirements of each database. This is essential to ensure resilience and effective data recovery in any scenario. Additionally, the module comes with vertical scaling enabled by default, in addition to offering advanced storage auto-scaling capabilities, ensuring that the cluster dynamically adjusts to the data volume and workload.
To complement the robustness of the configuration, the module allows the inclusion of analytical nodes and read-only nodes, expanding the possibilities of using the cluster for scenarios that require in-depth analysis or intensive read operations without impacting overall performance.
The default configuration includes smart preset values, such as the MongoDB version, which is set to "7.0" to take advantage of the latest features while maintaining the option to adjust to specific versions as needed. This “best practices” approach ensures a solid starting point for most projects, reducing the need for manual adjustments and simplifying the deployment process.
Additionally, the ability to deploy clusters in any region and cloud provider — such as AWS, Azure, or GCP — offers unmatched flexibility, allowing teams to choose the best solution based on their cost, performance, and compliance preferences.
In summary, this module not only facilitates the configuration and management of MongoDB Atlas clusters with an extensive range of options and adjustments but also promotes secure and efficient configuration practices, making it a valuable tool for developers and database administrators in implementing scalable and reliable data solutions in the cloud.
The use of the lifecycle directive with the ignore_changes option in the Terraform code was specifically implemented to accommodate manual upscale situations of the MongoDB Atlas cluster, which should not be automatically reversed by Terraform in subsequent executions. This approach ensures that, after a manual increase in storage capacity (disk_size_gb) or other specific replication configurations (replication_specs), Terraform does not attempt to undo these changes to align the resource state with the original definition in the code. Essentially, it allows configuration adjustments made outside of Terraform, such as an upscale to optimize performance or meet growing demands, to remain intact without being overwritten by future Terraform executions, ensuring operational flexibility while maintaining infrastructure management as code.
In the variable.tf file, we create variables with default values:
variable "name" {
  description = "The name of the cluster."
  type        = string
}

variable "cluster_type" {
  description = <<HEREDOC
Optional - Specifies the type of the cluster that you want to modify. You cannot convert
a sharded cluster deployment to a replica set deployment. Accepted values include:
REPLICASET for Replica set, SHARDED for Sharded cluster, and GEOSHARDED for Global Cluster.
HEREDOC
  default     = "REPLICASET"
}

variable "mongo_db_major_version" {
  description = <<HEREDOC
Optional - Version of the cluster to deploy. Atlas supports the following MongoDB versions
for M10+ clusters: 5.0, 6.0, or 7.0.
HEREDOC
  default     = "7.0"
}

variable "version_release_system" {
  description = <<HEREDOC
Optional - Release cadence that Atlas uses for this cluster. This parameter defaults to LTS.
If you set this field to CONTINUOUS, you must omit the mongo_db_major_version field. Atlas accepts:
CONTINUOUS - Atlas deploys the latest version of MongoDB available for the cluster tier.
LTS - Atlas deploys the latest Long Term Support (LTS) version of MongoDB available for the cluster tier.
HEREDOC
  default     = "LTS"
}

variable "disk_size_gb" {
  description = <<HEREDOC
Optional - Capacity, in gigabytes, of the host's root volume. Increase this
number to add capacity, up to a maximum possible value of 4096 (i.e., 4 TB). This value must
be a positive integer. If you specify diskSizeGB with a lower disk size, Atlas defaults to
the minimum disk size value. Note: The maximum value for disk storage cannot exceed 50 times
the maximum RAM for the selected cluster. If you require additional storage space beyond this
limitation, consider upgrading your cluster to a higher tier.
HEREDOC
  type        = number
  default     = 10
}

variable "backup_enabled" {
  description = <<HEREDOC
Optional - Flag indicating if the cluster uses Cloud Backup for backups. If true, the cluster
uses Cloud Backup for backups. The default is true.
HEREDOC
  type        = bool
  default     = true
}

variable "pit_enabled" {
  description = <<HEREDOC
Optional - Flag that indicates if the cluster uses Continuous Cloud Backup. If set to true,
backup_enabled must also be set to true. The default is true.
HEREDOC
  type        = bool
  default     = true
}

variable "disk_gb_enabled" {
  description = <<HEREDOC
Optional - Specifies whether disk auto-scaling is enabled. The default is true.
HEREDOC
  type        = bool
  default     = true
}

variable "region_configs" {
  description = <<HEREDOC
Required - Physical location of the region. Each regionsConfig document describes
the region's priority in elections and the number and type of MongoDB nodes Atlas
deploys to the region. You can set the following parameters:

- region_name - Optional - Physical location of your MongoDB cluster. The region you choose can affect network latency for clients accessing your databases.

- electable_nodes - Optional - Number of electable nodes for Atlas to deploy to the region. Electable nodes can become the primary and can facilitate local reads. The total number of electableNodes across all replication spec regions must total 3, 5, or 7. Specify 0 if you do not want any electable nodes in the region. You cannot create electable nodes in a region if priority is 0.

- priority - Optional - Election priority of the region. For regions with only read-only nodes, set this value to 0. For regions where electable_nodes is at least 1, each region must have a priority of exactly one (1) less than the previous region. The first region must have a priority of 7. The lowest possible priority is 1. The priority 7 region identifies the Preferred Region of the cluster. Atlas places the primary node in the Preferred Region. Priorities 1 through 7 are exclusive - no more than one region per cluster can be assigned a given priority. Example: If you have three regions, their priorities would be 7, 6, and 5 respectively. If you added two more regions for supporting electable nodes, the priorities of those regions would be 4 and 3 respectively.

- read_only_nodes - Optional - Number of read-only nodes for Atlas to deploy to the region. Read-only nodes can never become the primary, but can facilitate local reads. Specify 0 if you do not want any read-only nodes in the region.

- analytics_nodes - Optional - The number of analytics nodes for Atlas to deploy to the region. Analytics nodes are useful for handling analytic data such as reporting queries from BI Connector for Atlas. Analytics nodes are read-only and can never become the primary. If you do not specify this option, no analytics nodes are deployed to the region.
HEREDOC
  type        = any
}

# ------------------------------------------------------------------------------
# MONGODB BI CONNECTOR
# ------------------------------------------------------------------------------
variable "bi_connector_enabled" {
  description = <<HEREDOC
Optional - Specifies whether or not BI Connector for Atlas is enabled on the cluster.
Set to true to enable BI Connector for Atlas. Set to false to disable BI Connector for Atlas.
HEREDOC
  type        = bool
  default     = false
}

variable "bi_connector_read_preference" {
  description = <<HEREDOC
Optional - Specifies the read preference to be used by BI Connector for Atlas on the cluster.
Each BI Connector for Atlas read preference contains a distinct combination of readPreference and readPreferenceTags options. For details on BI Connector for Atlas read preferences, refer to the BI Connector Read Preferences Table.
Set to "primary" to have BI Connector for Atlas read from the primary. Set to "secondary" to have BI Connector for Atlas read from a secondary member (the default if there are no analytics nodes in the cluster). Set to "analytics" to have BI Connector for Atlas read from an analytics node (the default if the cluster contains analytics nodes).
HEREDOC
  type        = string
  default     = "secondary"
}

# ------------------------------------------------------------------------------
# MONGODB ADVANCED CONFIGURATION
# ------------------------------------------------------------------------------
variable "fail_index_key_too_long" {
  description = <<HEREDOC
Optional - When true, documents can only be updated or inserted if, for all indexed fields on the target collection, the corresponding index entries do not exceed 1024 bytes. When false, mongod writes documents that exceed the limit but does not index them.
HEREDOC
  type        = bool
  default     = false
}

variable "javascript_enabled" {
  description = <<HEREDOC
Optional - When true, the cluster allows execution of operations that perform server-side executions of JavaScript. When false, the cluster disables execution of those operations.
HEREDOC
  type        = bool
  default     = true
}

variable "minimum_enabled_tls_protocol" {
  description = <<HEREDOC
Optional - Sets the minimum Transport Layer Security (TLS) version the cluster accepts for incoming connections. Valid values are: TLS1_0, TLS1_1, TLS1_2. The default is "TLS1_2".
HEREDOC
  default     = "TLS1_2"
}

variable "no_table_scan" {
  description = <<HEREDOC
Optional - When true, the cluster disables the execution of any query that requires a collection scan to return results. When false, the cluster allows the execution of those operations.
HEREDOC
  type        = bool
  default     = false
}

variable "oplog_size_mb" {
  description = <<HEREDOC
Optional - The custom oplog size of the cluster.
A value of null indicates that the cluster uses the default oplog size calculated by Atlas.
HEREDOC
  type        = number
  default     = null
}

variable "default_read_concern" {
  description = <<HEREDOC
Optional - The default read concern for the cluster. The default is "local".
HEREDOC
  default     = "local"
}

variable "default_write_concern" {
  description = <<HEREDOC
Optional - The default write concern for the cluster. The default is "majority".
HEREDOC
  default     = "majority"
}

variable "oplog_min_retention_hours" {
  description = <<HEREDOC
Minimum retention window for the cluster's oplog, expressed in hours.
A value of null indicates that the cluster uses the default minimum oplog window that MongoDB Cloud calculates.
HEREDOC
  type        = number
  default     = null
}

variable "transaction_lifetime_limit_seconds" {
  description = <<HEREDOC
Optional - Lifetime, in seconds, of multi-document transactions. Defaults to 60 seconds.
HEREDOC
  type        = number
  default     = 60
}

variable "sample_size_bi_connector" {
  description = <<HEREDOC
Optional - Number of documents per database to sample when gathering schema information. Defaults to 100.
Available only for Atlas deployments in which BI Connector for Atlas is enabled.
HEREDOC
  type        = number
  default     = 100
}

variable "sample_refresh_interval_bi_connector" {
  description = <<HEREDOC
Optional - Interval in seconds at which the mongosqld process re-samples data to create its relational schema. The default value is 300.
The specified value must be a positive integer.
Available only for Atlas deployments in which BI Connector for Atlas is enabled.
HEREDOC
  type        = number
  default     = 300
}

# ------------------------------------------------------------------------------
# MONGODB REPLICATION SPECS
# ------------------------------------------------------------------------------
variable "num_shards" {
  description = <<HEREDOC
Optional - Number of shards, minimum 1.
The default is null if type is REPLICASET.
HEREDOC
  type        = number
  default     = null
}

# ------------------------------------------------------------------------------
# MONGODB BACKUP POLICY
# ------------------------------------------------------------------------------
variable "update_snapshots" {
  description = <<HEREDOC
Optional - Specify true to apply the retention changes in the updated backup policy to snapshots that Atlas took previously.
HEREDOC
  type        = bool
  default     = false
}

variable "reference_hour_of_day" {
  description = <<HEREDOC
Optional - Hour of the day in UTC at which Atlas takes the daily snapshots of the cluster.
HEREDOC
  type        = number
  default     = 3
}

variable "reference_minute_of_hour" {
  description = <<HEREDOC
Optional - Minute of the hour in UTC at which Atlas takes the daily snapshots of the cluster.
HEREDOC
  type        = number
  default     = 30
}

variable "restore_window_days" {
  description = <<HEREDOC
Optional - Number of days Atlas retains the backup snapshots in the snapshot schedule.
HEREDOC
  type        = number
  default     = 3
}

variable "policy_item_hourly_frequency_interval" {
  description = <<HEREDOC
Optional - Interval, in hours, between snapshots that Atlas takes of the cluster.
HEREDOC
  type        = number
  default     = 12
}

variable "policy_item_hourly_retention_unit" {
  description = <<HEREDOC
Optional - Unit of time that Atlas retains each snapshot in the hourly snapshot schedule.
HEREDOC
  type        = string
  default     = "days"
}

variable "policy_item_hourly_retention_value" {
  description = <<HEREDOC
Optional - Number of units of time that Atlas retains each snapshot in the hourly snapshot schedule.
HEREDOC
  type        = number
  default     = 3
}

variable "policy_item_daily_frequency_interval" {
  description = <<HEREDOC
Optional - Interval, in days, between snapshots that Atlas takes of the cluster.
HEREDOC
  type        = number
  default     = 1
}

variable "policy_item_daily_retention_unit" {
  description = <<HEREDOC
Optional - Unit of time that Atlas retains each snapshot in the daily snapshot schedule.
HEREDOC
  type        = string
  default     = "days"
}

variable "policy_item_daily_retention_value" {
  description = <<HEREDOC
Optional - Number of units of time that Atlas retains each snapshot in the daily snapshot schedule.
HEREDOC
  type        = number
  default     = 7
}

variable "policy_item_weekly_frequency_interval" {
  description = <<HEREDOC
Optional - Interval, in weeks, between snapshots that Atlas takes of the cluster.
HEREDOC
  type        = number
  default     = 1
}

variable "policy_item_weekly_retention_unit" {
  description = <<HEREDOC
Optional - Unit of time that Atlas retains each snapshot in the weekly snapshot schedule.
HEREDOC
  type        = string
  default     = "weeks"
}

variable "policy_item_weekly_retention_value" {
  description = <<HEREDOC
Optional - Number of units of time that Atlas retains each snapshot in the weekly snapshot schedule.
HEREDOC
  type        = number
  default     = 4
}

variable "policy_item_monthly_frequency_interval" {
  description = <<HEREDOC
Optional - Interval, in months, between snapshots that Atlas takes of the cluster.
HEREDOC
  type        = number
  default     = 1
}

variable "policy_item_monthly_retention_unit" {
  description = <<HEREDOC
Optional - Unit of time that Atlas retains each snapshot in the monthly snapshot schedule.
HEREDOC
  type        = string
  default     = "months"
}

variable "policy_item_monthly_retention_value" {
  description = <<HEREDOC
Optional - Number of units of time that Atlas retains each snapshot in the monthly snapshot schedule.
HEREDOC
  type        = number
  default     = 12
}

# ------------------------------------------------------------------------------
# MONGODB TAGS
# ------------------------------------------------------------------------------
variable "application" {
  description = <<HEREDOC
Optional - Key-value pair that tags and categorizes the cluster for billing and organizational purposes.
HEREDOC
  type        = string
}

variable "environment" {
  description = <<HEREDOC
Optional - Key-value pair that tags and categorizes the cluster for billing and organizational purposes.
HEREDOC
  type        = string
}

# ------------------------------------------------------------------------------
# MONGODB DATA
# ------------------------------------------------------------------------------
variable "project_name" {
  description = <<HEREDOC
Required - The name of the Atlas project in which to create the cluster.
HEREDOC
  type        = string
}
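One optional hardening step: since cluster_type only accepts a fixed set of values, a validation block can catch typos at plan time. This is a sketch of a variant of the variable above, assuming Terraform 0.13+ (which introduced custom validation; the module above only requires 0.12):

variable "cluster_type" {
  description = "Type of the cluster: REPLICASET, SHARDED, or GEOSHARDED."
  default     = "REPLICASET"

  validation {
    condition     = contains(["REPLICASET", "SHARDED", "GEOSHARDED"], var.cluster_type)
    error_message = "cluster_type must be REPLICASET, SHARDED, or GEOSHARDED."
  }
}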
We configured a file called locals.tf specifically to define two example tags, identifying the name of our application and the environment in which it operates. If you prefer, you can adopt an external tag module, similar to those used in AWS, and integrate it into this configuration.
locals {
  tags = {
    name        = var.application
    environment = var.environment
  }
}
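If you do integrate an external tag module, or simply want callers to pass extra tags, a minimal sketch uses Terraform's merge() function. Here, extra_tags is a hypothetical map(string) variable with a default of {} that you would add to the module:

locals {
  tags = merge(
    {
      name        = var.application
      environment = var.environment
    },
    var.extra_tags # hypothetical variable, not part of the module above
  )
}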
In this article, we embrace the use of data sources in Terraform to establish a dynamic connection with existing resources, such as our MongoDB Atlas project. Specifically, in the data.tf file, we define a mongodbatlas_project data source to access information about an existing project based on its name:
data "mongodbatlas_project" "default" {
  name = var.project_name
}
Here, var.project_name refers to the name of the project we want to query, an approach that allows us to keep our configuration flexible and reusable. The value of this variable can be provided in several ways, significantly expanding the possibilities for using our infrastructure as code.
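For example, the same value could be supplied on the command line or through an environment variable — standard Terraform behavior — in addition to the .tfvars file we use next:

terraform plan -var="project_name=project-test"

# or via an environment variable, which Terraform reads automatically:
export TF_VAR_project_name="project-test"
terraform plan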
The terraform.tfvars file is used to define variable values that will be applied in the Terraform configuration, making infrastructure as code more dynamic and adaptable to the specific needs of each project or environment. In our case, the terraform.tfvars file contains essential values for creating a cluster in MongoDB Atlas, including details such as the project name, cluster characteristics, and auto-scaling settings. See below how these definitions apply:
project_name = "project-test"
name         = "cluster-demo"
cluster_type = "REPLICASET"
application  = "teste-cluster"
environment  = "dev"

region_configs = [{
  provider_name = "AWS"
  region_name   = "US_EAST_1"
  priority      = 7

  electable_specs = {
    instance_size   = "M10"
    node_count      = 3
    disk_iops       = 120
    disk_size_gb    = 10
    ebs_volume_type = "STANDARD"
  }

  auto_scaling = {
    disk_gb_enabled            = true
    compute_enabled            = true
    compute_scale_down_enabled = true
    compute_min_instance_size  = "M10"
    compute_max_instance_size  = "M30"
  }
}]
These values defined in terraform.tfvars are used by Terraform to populate corresponding variables in your configuration. For example, if you have a module or feature that creates a cluster in MongoDB Atlas, you can reference these variables directly to configure properties such as the project name, cluster settings, and regional specifications. This allows significant flexibility in customizing your infrastructure based on different environments or project requirements.
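Since this code was written as a reusable module, a root configuration could also consume it with a module block instead of a local terraform.tfvars. This is a hypothetical sketch — the source path is an assumption about where you keep the module:

module "cluster_demo" {
  source = "./modules/mongodb-atlas-cluster" # hypothetical path

  project_name = "project-test"
  name         = "cluster-demo"
  cluster_type = "REPLICASET"
  application  = "teste-cluster"
  environment  = "dev"

  region_configs = [{
    provider_name = "AWS"
    region_name   = "US_EAST_1"
    priority      = 7
    electable_specs = {
      instance_size   = "M10"
      node_count      = 3
      disk_iops       = 120
      ebs_volume_type = "STANDARD"
    }
    auto_scaling = {
      disk_gb_enabled            = true
      compute_enabled            = true
      compute_scale_down_enabled = true
      compute_min_instance_size  = "M10"
      compute_max_instance_size  = "M30"
    }
  }]
}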
The file structure is as follows:
  • main.tf: In this file, we define the main resources, mongodbatlas_advanced_cluster and mongodbatlas_cloud_backup_schedule. Here, we configure the cluster and its backup routines.
  • provider.tf: This file is where we define the provider we are using — in our case, mongodbatlas. We authenticate using environment variables, as mentioned previously.
  • terraform.tfvars: This file contains the variable values used by our cluster — for example, the cluster name, version, and size, among others.
  • variable.tf: Here, we define the variables mentioned in the terraform.tfvars file, specifying the type and, optionally, a default value.
  • version.tf: This file specifies the versions of Terraform and the providers we are using.
  • data.tf: Here, we specify a data source that brings us information about our existing project. We look it up by name, and it gives our module the project ID.
  • locals.tf: We specify example tags to use on our cluster.
Now is the time to apply. =D
We run terraform init in the terminal, in the folder where the files are located, so that Terraform downloads the providers, modules, and so on.
Note: Remember to export the environment variables with the public and private keys:
export MONGODB_ATLAS_PUBLIC_KEY="public"
export MONGODB_ATLAS_PRIVATE_KEY="private"
Now, we run terraform init.
(base) samuelmolling@Samuels-MacBook-Pro cluster % terraform init

Initializing the backend...

Initializing provider plugins...
- Finding mongodb/mongodbatlas versions matching "1.14.0"...
- Installing mongodb/mongodbatlas v1.14.0...
- Installed mongodb/mongodbatlas v1.14.0 (signed by a HashiCorp partner, key ID 2A32ED1F3AD25ABF)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run `terraform init` in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running `terraform plan` to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
Now that init has worked, let's run terraform plan and evaluate what will happen:
(base) samuelmolling@Samuels-MacBook-Pro cluster % terraform plan
data.mongodbatlas_project.default: Reading...
data.mongodbatlas_project.default: Read complete after 2s [id=65bfd71a08b61c36ca4d8eaa]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # mongodbatlas_advanced_cluster.default will be created
  + resource "mongodbatlas_advanced_cluster" "default" {
      + advanced_configuration         = [
          + {
              + default_read_concern                 = "local"
              + default_write_concern                = "majority"
              + fail_index_key_too_long              = false
              + javascript_enabled                   = true
              + minimum_enabled_tls_protocol         = "TLS1_2"
              + no_table_scan                        = false
              + oplog_size_mb                        = (known after apply)
              + sample_refresh_interval_bi_connector = 300
              + sample_size_bi_connector             = 100
              + transaction_lifetime_limit_seconds   = 60
            },
        ]
      + backup_enabled                 = true
      + cluster_id                     = (known after apply)
      + cluster_type                   = "REPLICASET"
      + connection_strings             = (known after apply)
      + create_date                    = (known after apply)
      + disk_size_gb                   = 10
      + encryption_at_rest_provider    = (known after apply)
      + id                             = (known after apply)
      + mongo_db_major_version         = "7.0"
      + mongo_db_version               = (known after apply)
      + name                           = "cluster-demo"
      + paused                         = (known after apply)
      + pit_enabled                    = true
      + project_id                     = "65bfd71a08b61c36ca4d8eaa"
      + root_cert_type                 = (known after apply)
      + state_name                     = (known after apply)
      + termination_protection_enabled = (known after apply)
      + version_release_system         = (known after apply)

      + bi_connector_config {
          + enabled         = false
          + read_preference = "secondary"
        }

      + replication_specs {
          + container_id = (known after apply)
          + id           = (known after apply)
          + num_shards   = 1
          + zone_name    = "ZoneName managed by Terraform"

          + region_configs {
              + priority      = 7
              + provider_name = "AWS"
              + region_name   = "US_EAST_1"

              + analytics_auto_scaling {
                  + compute_enabled            = (known after apply)
                  + compute_max_instance_size  = (known after apply)
                  + compute_min_instance_size  = (known after apply)
                  + compute_scale_down_enabled = (known after apply)
                  + disk_gb_enabled            = (known after apply)
                }

              + analytics_specs {
                  + disk_iops       = (known after apply)
                  + ebs_volume_type = "STANDARD"
                  + instance_size   = "M10"
                  + node_count      = 0
                }

              + auto_scaling {
                  + compute_enabled            = true
                  + compute_max_instance_size  = "M30"
                  + compute_min_instance_size  = "M10"
                  + compute_scale_down_enabled = true
                  + disk_gb_enabled            = true
                }

              + electable_specs {
                  + disk_iops       = (known after apply)
                  + ebs_volume_type = "STANDARD"
                  + instance_size   = "M10"
                  + node_count      = 3
                }

              + read_only_specs {
                  + disk_iops       = (known after apply)
                  + ebs_volume_type = "STANDARD"
                  + instance_size   = "M10"
                  + node_count      = 0
                }
            }
        }

      + tags {
          + key   = "environment"
          + value = "dev"
        }
      + tags {
          + key   = "name"
          + value = "teste-cluster"
        }
    }

  # mongodbatlas_cloud_backup_schedule.default will be created
  + resource "mongodbatlas_cloud_backup_schedule" "default" {
      + auto_export_enabled                      = (known after apply)
      + cluster_id                               = (known after apply)
      + cluster_name                             = "cluster-demo"
      + id                                       = (known after apply)
      + id_policy                                = (known after apply)
      + next_snapshot                            = (known after apply)
      + project_id                               = "65bfd71a08b61c36ca4d8eaa"
      + reference_hour_of_day                    = 3
      + reference_minute_of_hour                 = 30
      + restore_window_days                      = 3
      + update_snapshots                         = false
      + use_org_and_group_names_in_export_prefix = (known after apply)

      + policy_item_daily {
          + frequency_interval = 1
          + frequency_type     = (known after apply)
          + id                 = (known after apply)
          + retention_unit     = "days"
          + retention_value    = 7
        }

      + policy_item_hourly {
          + frequency_interval = 12
          + frequency_type     = (known after apply)
          + id                 = (known after apply)
          + retention_unit     = "days"
          + retention_value    = 3
        }

      + policy_item_monthly {
          + frequency_interval = 1
          + frequency_type     = (known after apply)
          + id                 = (known after apply)
          + retention_unit     = "months"
          + retention_value    = 12
        }

      + policy_item_weekly {
          + frequency_interval = 1
          + frequency_type     = (known after apply)
          + id                 = (known after apply)
          + retention_unit     = "weeks"
          + retention_value    = 4
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run `terraform apply` now.
Great! It was exactly the output we expected to see: the creation of a cluster resource along with the backup policies. Let's apply this!
When running the terraform apply command, you will be prompted for approval with yes or no. Type yes.
(base) samuelmolling@Samuels-MacBook-Pro cluster % terraform apply

data.mongodbatlas_project.default: Reading...
data.mongodbatlas_project.default: Read complete after 2s [id=65bfd71a08b61c36ca4d8eaa]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # mongodbatlas_advanced_cluster.default will be created
  + resource "mongodbatlas_advanced_cluster" "default" {
      + advanced_configuration         = [
          + {
              + default_read_concern                 = "local"
              + default_write_concern                = "majority"
              + fail_index_key_too_long              = false
              + javascript_enabled                   = true
              + minimum_enabled_tls_protocol         = "TLS1_2"
              + no_table_scan                        = false
              + oplog_size_mb                        = (known after apply)
              + sample_refresh_interval_bi_connector = 300
              + sample_size_bi_connector             = 100
              + transaction_lifetime_limit_seconds   = 60
            },
        ]
      + backup_enabled                 = true
      + cluster_id                     = (known after apply)
      + cluster_type                   = "REPLICASET"
      + connection_strings             = (known after apply)
      + create_date                    = (known after apply)
      + disk_size_gb                   = 10
      + encryption_at_rest_provider    = (known after apply)
      + id                             = (known after apply)
      + mongo_db_major_version         = "7.0"
      + mongo_db_version               = (known after apply)
      + name                           = "cluster-demo"
      + paused                         = (known after apply)
      + pit_enabled                    = true
      + project_id                     = "65bfd71a08b61c36ca4d8eaa"
      + root_cert_type                 = (known after apply)
      + state_name                     = (known after apply)
      + termination_protection_enabled = (known after apply)
      + version_release_system         = (known after apply)

      + bi_connector_config {
          + enabled         = false
          + read_preference = "secondary"
        }

      + replication_specs {
          + container_id = (known after apply)
          + id           = (known after apply)
          + num_shards   = 1
          + zone_name    = "ZoneName managed by Terraform"

          + region_configs {
              + priority      = 7
              + provider_name = "AWS"
              + region_name   = "US_EAST_1"

              + analytics_auto_scaling {
                  + compute_enabled            = (known after apply)
                  + compute_max_instance_size  = (known after apply)
                  + compute_min_instance_size  = (known after apply)
                  + compute_scale_down_enabled = (known after apply)
                  + disk_gb_enabled            = (known after apply)
                }

              + analytics_specs {
                  + disk_iops       = (known after apply)
                  + ebs_volume_type = "STANDARD"
                  + instance_size   = "M10"
                  + node_count      = 0
                }

              + auto_scaling {
                  + compute_enabled            = true
                  + compute_max_instance_size  = "M30"
                  + compute_min_instance_size  = "M10"
                  + compute_scale_down_enabled = true
                  + disk_gb_enabled            = true
                }

              + electable_specs {
                  + disk_iops       = (known after apply)
                  + ebs_volume_type = "STANDARD"
                  + instance_size   = "M10"
                  + node_count      = 3
                }

              + read_only_specs {
                  + disk_iops       = (known after apply)
                  + ebs_volume_type = "STANDARD"
                  + instance_size   = "M10"
                  + node_count      = 0
                }
            }
        }

      + tags {
          + key   = "environment"
          + value = "dev"
        }
      + tags {
          + key   = "name"
          + value = "teste-cluster"
        }
    }

  # mongodbatlas_cloud_backup_schedule.default will be created
  + resource "mongodbatlas_cloud_backup_schedule" "default" {
      + auto_export_enabled                      = (known after apply)
      + cluster_id                               = (known after apply)
      + cluster_name                             = "cluster-demo"
      + id                                       = (known after apply)
      + id_policy                                = (known after apply)
      + next_snapshot                            = (known after apply)
      + project_id                               = "65bfd71a08b61c36ca4d8eaa"
      + reference_hour_of_day                    = 3
      + reference_minute_of_hour                 = 30
      + restore_window_days                      = 3
      + update_snapshots                         = false
      + use_org_and_group_names_in_export_prefix = (known after apply)

      + policy_item_daily {
          + frequency_interval = 1
          + frequency_type     = (known after apply)
          + id                 = (known after apply)
          + retention_unit     = "days"
          + retention_value    = 7
        }

      + policy_item_hourly {
          + frequency_interval = 12
          + frequency_type     = (known after apply)
          + id                 = (known after apply)
          + retention_unit     = "days"
          + retention_value    = 3
        }

      + policy_item_monthly {
          + frequency_interval = 1
          + frequency_type     = (known after apply)
          + id                 = (known after apply)
          + retention_unit     = "months"
          + retention_value    = 12
        }

      + policy_item_weekly {
          + frequency_interval = 1
          + frequency_type     = (known after apply)
          + id                 = (known after apply)
          + retention_unit     = "weeks"
          + retention_value    = 4
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

mongodbatlas_advanced_cluster.default: Creating...
mongodbatlas_advanced_cluster.default: Still creating... [10s elapsed]
mongodbatlas_advanced_cluster.default: Still creating... [8m40s elapsed]
mongodbatlas_advanced_cluster.default: Creation complete after 8m46s [id=Y2x1c3Rlcl9pZA==:NjViZmRmYzczMTBiN2Y2ZDFhYmIxMmQ0-Y2x1c3Rlcl9uYW1l:Y2x1c3Rlci1kZW1v-cHJvamVjdF9pZA==:NjViZmQ3MWEwOGI2MWMzNmNhNGQ4ZWFh]
mongodbatlas_cloud_backup_schedule.default: Creating...
mongodbatlas_cloud_backup_schedule.default: Creation complete after 2s [id=Y2x1c3Rlcl9uYW1l:Y2x1c3Rlci1kZW1v-cHJvamVjdF9pZA==:NjViZmQ3MWEwOGI2MWMzNmNhNGQ4ZWFh]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
This process took eight minutes and 40 seconds to execute. I shortened the log output, but don't worry if this step takes time.
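A side note for automation: in CI/CD pipelines, the interactive yes prompt is normally avoided by saving the plan and applying it in a second step, which is standard Terraform workflow:

terraform plan -out=tfplan
terraform apply tfplan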
Now, let’s look in Atlas to see if the cluster was created successfully…
[Screenshots: Atlas cluster overview and cluster backup information]
We were able to create our first replica set with a standard backup policy, including point-in-time recovery (PITR) and scheduled snapshots.
In this tutorial, we saw how to create the first cluster in our project created in the last article. We created a module that also includes a backup policy. In an upcoming article, we will look at how to create an API key and user using Terraform and Atlas.
To learn more about MongoDB and various tools, I invite you to visit the Developer Center to read the other articles.