Streamlining Log Management to Amazon S3 Using Atlas Push-based Log Exports With HashiCorp Terraform

Aastha Mahendru • 5 min read • Published Jul 08, 2024 • Updated Jul 08, 2024
As the volumes of data managed by organizations grow exponentially, effective log management becomes critical for maintaining the performance and security of database platforms.
Automating log management through infrastructure-as-code (IaC) tools enables users to push mongod, mongos, and audit logs directly to their S3 buckets, instead of manually downloading zipped log files or building custom solutions to continuously pull logs from Atlas.
To enhance developer agility, we’re introducing a new capability that enables developer teams to push logs to an Amazon S3 bucket, providing a continuous, scalable, cost-effective solution for log storage and analysis.
This guide will walk you through how to set up push-based logging to Amazon S3 in Atlas through our HashiCorp Terraform MongoDB Atlas provider. Let’s get started!

Prerequisites

You can enable the push-based log export feature on a project level in your Atlas organization after you have authorized your Atlas project to access the S3 bucket in your AWS account. This means all clusters in your Atlas project will be automatically configured to push logs to the S3 bucket.
So to start off, let’s give Atlas what it needs to get the appropriate permissions!
Note that if you prefer to manage IAM roles and permissions via the UI or other tools, you can do so by following the appropriate steps outlined in Set Up Unified AWS Access, skipping Step #2, and configuring the appropriate roles in your Terraform configuration, as sketched below.
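For instance, if the role already exists because you created it via the UI, a hypothetical sketch of referencing that pre-existing role from Terraform (the role name below is a placeholder) could look like:

data "aws_iam_role" "existingAtlasRole" {
  name = "my-existing-atlas-role" # placeholder: the role you created outside Terraform
}

You could then pass data.aws_iam_role.existingAtlasRole.arn as the iam_assumed_role_arn in the authorization resource shown later in this tutorial.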

Configure the HashiCorp Terraform MongoDB Atlas and AWS providers

In this tutorial, we will use Terraform to manage all operations so you don’t have to worry about jumping between your AWS Management Console, your MongoDB Atlas account, and your Terraform configuration files. This requires you to first specify the HashiCorp Terraform MongoDB Atlas provider as well as the HashiCorp Terraform AWS provider versions in a versions.tf file, as below:
terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = ">= 1.16.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
  required_version = ">= 1.0"
}
Note: Before deploying anything, be sure to store the MongoDB Atlas programmatic API keys you created as part of the prerequisites as environment variables, and make sure you have also configured your credentials for the AWS provider.
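As a minimal sketch, assuming your Atlas keys are exported as the MONGODB_ATLAS_PUBLIC_KEY and MONGODB_ATLAS_PRIVATE_KEY environment variables and your AWS credentials as the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables, the provider blocks themselves can stay minimal (the region below is a placeholder):

provider "mongodbatlas" {
  # Credentials are read from MONGODB_ATLAS_PUBLIC_KEY and MONGODB_ATLAS_PRIVATE_KEY
}

provider "aws" {
  # Credentials are read from AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  region = "us-east-1" # placeholder: choose the region for your S3 bucket
}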
We will now create a variables.tf file for declaring Terraform variables and a terraform.tfvars file for defining variable values. These files are typically created within the root directory of your Terraform project.
In variables.tf, we will define the identifier of your Atlas organization, the name of your new Atlas project, and the name of the new S3 bucket that the logs will be pushed to:
# Atlas Organization ID
variable "atlas_org_id" {
  type        = string
  description = "Atlas Organization ID"
}

# Atlas Project Name
variable "atlas_project_name" {
  type        = string
  description = "Atlas Project Name"
}

# Unique AWS S3 Bucket Name
variable "s3_bucket_name" {
  type        = string
  description = "AWS S3 Bucket Name for logs"
}
In the terraform.tfvars file, we will configure the values for the above variables:
atlas_org_id       = "<UPDATE WITH YOUR ORG ID>"
atlas_project_name = "<UPDATE WITH A NAME FOR PROJECT>"
s3_bucket_name     = "<UPDATE WITH A UNIQUE NAME FOR YOUR S3 BUCKET>"
We can now run terraform init in the terminal. This will initialize Terraform and download the Terraform MongoDB Atlas and AWS providers.
[Screenshot: Terraform and the providers have been successfully initialized.]

Set up AWS access for MongoDB Atlas

To set up unified AWS access, you must have Organization Owner or Project Owner access to the Atlas project.
From here, set up cloud provider access in MongoDB Atlas. You can do this with the mongodbatlas_cloud_provider_access_setup resource, which returns an atlas_aws_account_arn and atlas_assumed_role_external_id.
At this point, your *.tf configuration file should look similar to this:
1resource "mongodbatlas_project" "exampleProject" {
2 name = var.atlas_project_name
3 org_id = var.atlas_org_id
4}
5
6resource "mongodbatlas_cloud_provider_access_setup" "setupOnly" {
7 project_id = mongodbatlas_project.exampleProject.id
8 provider_name = "AWS"
9}
If you are curious to learn more about this authorization mechanism, visit the AWS docs.
Next, create an IAM role whose assume_role_policy is configured with the Atlas AWS account from the previous step as the principal.
1resource "aws_iam_role" "atlasRole" {
2 name = "atlas-role"
3 max_session_duration = 43200
4
5 assume_role_policy = jsonencode({
6 "Version" : "2012-10-17",
7 "Statement" : [
8 {
9 "Effect" : "Allow",
10 "Principal" : {
11 "AWS" : "${mongodbatlas_cloud_provider_access_setup.setupOnly.aws_config[0].atlas_aws_account_arn}"
12 },
13 "Action" : "sts:AssumeRole",
14 "Condition" : {
15 "StringEquals" : {
16 "sts:ExternalId" : "${mongodbatlas_cloud_provider_access_setup.setupOnly.aws_config[0].atlas_assumed_role_external_id}"
17 }
18 }
19 }
20 ]
21 })
22}
23
24resource "aws_iam_role_policy" "atlasRolePolicy" {
25 name = "atlas-role-policy"
26 role = aws_iam_role.atlasRole.id
27
28 policy = jsonencode({
29 "Version" : "2012-10-17",
30 "Statement" : [
31 {
32 "Effect" : "Allow",
33 "Action" : "s3:*",
34 "Resource" : "*"
35 }
36 ]
37 })
38}
Finally, use the mongodbatlas_cloud_provider_access_authorization resource to authorize and configure the new IAM assumed role ARN.
Tip: For now, we have only authorized this role for S3 actions, but once successful, you can expand the policy and use the role_id value when configuring other Atlas services that use AWS, such as Data Federation and Encryption at Rest (see the output sketch after the next code block).
1resource "mongodbatlas_cloud_provider_access_authorization" "authRole" {
2 project_id = mongodbatlas_project.exampleProject.id
3 role_id = mongodbatlas_cloud_provider_access_setup.setupOnly.role_id
4
5 aws {
6 iam_assumed_role_arn = aws_iam_role.atlasRole.arn
7 }
8}
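If you later want to reuse this authorization for other Atlas services, a minimal sketch of exposing the authorized role_id as a Terraform output (the output name is an assumption) could be:

output "atlas_authorized_role_id" {
  # The role_id that other Atlas resources (e.g., Data Federation, Encryption at Rest) can reference
  value = mongodbatlas_cloud_provider_access_authorization.authRole.role_id
}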

Configure an Amazon S3 bucket

Now that you have authorized Atlas to access the Amazon S3 service, we will create a new S3 bucket (or use an existing bucket) and configure an aws_iam_role_policy resource to allow the previously created atlasRole to perform the operations required for pushing logs to this bucket:
1resource "aws_s3_bucket" "atlasLogBucket" {
2 bucket = var.s3_bucket_name
3 force_destroy = true // required for destroying as Atlas creates a test folder in the bucket when push-based log export is initially created
4}
5
6resource "aws_iam_role_policy" "atlasLogBucketPolicy" {
7 name = "atlas-log-bucket-policy"
8 role = aws_iam_role.atlasRole.id
9
10 policy = jsonencode({
11 "Version" : "2012-10-17",
12 "Statement" : [
13 {
14 "Effect" : "Allow",
15 "Action" : [
16 "s3:ListBucket",
17 "s3:PutObject",
18 "s3:GetObject",
19 "s3:GetBucketLocation"
20 ],
21 "Resource" : [
22 "${aws_s3_bucket.atlasLogBucket.arn}",
23 "${aws_s3_bucket.atlasLogBucket.arn}/*"
24 ]
25 }
26 ]
27 })
28}
Let’s deploy everything so far by running the below commands from the terminal:
terraform plan
terraform apply
With that, you have configured your S3 bucket!

Enable push-based log export

Coming back to where it all started, you can now easily enable the push-based log export configuration for your Atlas project. All you need is the name of the S3 bucket and the role_id from the mongodbatlas_cloud_provider_access_authorization resource. Add the below to your *.tf file:
1resource "mongodbatlas_push_based_log_export" "logExport" {
2 project_id = mongodbatlas_project.exampleProject.id
3 bucket_name = aws_s3_bucket.atlasLogBucket.bucket
4 iam_role_id = mongodbatlas_cloud_provider_access_authorization.authRole.role_id
5 prefix_path = "push-based-log-test"
6}
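Optionally, since exported logs accumulate continuously, you could manage storage costs with an S3 lifecycle rule. The following is a hedged sketch (the rule name and 90-day retention period are assumptions; adjust them to your requirements):

resource "aws_s3_bucket_lifecycle_configuration" "atlasLogExpiry" {
  bucket = aws_s3_bucket.atlasLogBucket.id

  rule {
    id     = "expire-old-atlas-logs" # placeholder rule name
    status = "Enabled"

    # Only expire objects under the configured log prefix
    filter {
      prefix = "${mongodbatlas_push_based_log_export.logExport.prefix_path}/"
    }

    expiration {
      days = 90 # assumption: pick a retention period that fits your needs
    }
  }
}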
To deploy again, run from the terminal:
terraform plan
terraform apply
If your deployment was successful, you should be greeted with “Apply complete!”
[Screenshot: Terraform in the terminal showcasing the deployment.]
You can verify the configuration by fetching the objects in the S3 bucket. You can do this by updating your Terraform configuration to include the aws_s3_objects data source, specifying the bucket name and the prefix path like this:
1data "aws_s3_objects" "atlasLogObjects" {
2 bucket = aws_s3_bucket.atlasLogBucket.bucket
3 prefix = mongodbatlas_push_based_log_export.logExport.prefix_path
4}
List the objects in the bucket by creating an outputs.tf file and referencing the keys attribute as below:
1output "log_bucket_keys" {
2 value = data.aws_s3_objects.atlasLogObjects.keys
3}
Run terraform plan followed by terraform apply in the terminal. You should now see an “atlas-test” object created in your S3 bucket in the outputs:
[Screenshot: Terraform in the terminal showing objects created in the Amazon S3 bucket as outputs.]
Atlas creates this test object to verify that the configured IAM role has write access to the S3 bucket for push-based log export. It is safe to delete this file once the log export configuration is successful.
Now that you have successfully configured push-based log export, as soon as a cluster is deployed in your Atlas project, you should see its logs in your configured S3 bucket. Happy logging!
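If there is no cluster in the project yet, a minimal sketch of a dedicated cluster that would start generating logs, assuming the 1.x block syntax of the provider (the cluster name, tier, and region below are placeholders), might look like:

resource "mongodbatlas_advanced_cluster" "testCluster" {
  project_id   = mongodbatlas_project.exampleProject.id
  name         = "log-export-test" # placeholder cluster name
  cluster_type = "REPLICASET"

  replication_specs {
    region_configs {
      provider_name = "AWS"
      region_name   = "US_EAST_1" # placeholder region
      priority      = 7

      electable_specs {
        instance_size = "M10" # placeholder dedicated tier
        node_count    = 3
      }
    }
  }
}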

All done

Congratulations! You now have everything that you need to start pushing your MongoDB Atlas database logs to your Amazon S3 bucket.
The HashiCorp Terraform Atlas provider is open-sourced under the Mozilla Public License v2.0 and we welcome community contributions. To learn more, see our contributing guidelines.
The fastest way to get started is to create a MongoDB Atlas account from the AWS Marketplace. To learn more about the Terraform provider, check out our documentation, tutorials, solution brief, or get started directly.
Go build with MongoDB Atlas and the HashiCorp Terraform Atlas provider today!