This is actually something you can do quite easily using Atlas Data Lake.
In fact we have a tutorial on how to do it right here: How to Automate Continuous Data Copying from MongoDB to S3
In the tutorial we write the data out to Parquet files, but in your case you can simply set the output format name to "csv" instead.
The process will look a bit like this:
- Create a data lake that points a virtual Data Lake Collection at the source collection in your Atlas Cluster
- Create a $out aggregation that matches the appropriate documents and writes them out to the desired location in your S3 bucket
- Put that aggregation into an Atlas Scheduled Trigger that runs once a day
That should be about it.
Feel free to reach out to me directly if you have any questions on this at Benjamin.Flast@mongodb.com