Archiving MongoDB documents to AWS Glacier via TimeSeries expireAfterSeconds (event?)

(I’m using Mongo 6 Community on RHEL.) Currently I have a custom process that archives data and then deletes it from MongoDB. I’m starting to convert some of our collections to TimeSeries collections, and after reading about the expireAfterSeconds feature I thought it would be useful to have the system delete documents instead of my process. Is there a way to archive (via a function() call?) documents when they are about to be removed? I’ve seen references to this in paid services (Atlas), so I’m guessing there isn’t a way to do it in the Community edition?

Hey @Allan_Chase,

Migrating to a time-series collection solely to use the expireAfterSeconds feature doesn’t seem ideal to me. However, you can check whether a TTL Index suits your use case. With a TTL index, you create a single-field index that MongoDB uses to automatically remove documents from a collection after a certain amount of time, or at a specific clock time.
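As a minimal sketch in mongosh (the collection name `events` and the field names `createdAt` / `expireAt` are hypothetical, just for illustration):

```javascript
// TTL index: remove documents roughly 30 days after their createdAt date.
// Note: the TTL monitor runs about once every 60 seconds, so removal is
// not instantaneous, and no pre-delete hook fires that you could use to
// archive documents first.
db.events.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 30 * 24 * 60 * 60 } // 30 days in seconds
)

// Alternative: expire each document at its own clock time by storing the
// intended expiry date in a field and setting expireAfterSeconds to 0.
db.events.createIndex(
  { expireAt: 1 },
  { expireAfterSeconds: 0 }
)
```

Because TTL deletion offers no callback, your archiving step would still need to run before the documents reach their expiry time.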

I see you are not using MongoDB Atlas, but for reference, Atlas offers an Online Archive feature: it archives your data to a MongoDB-managed, read-only Federated Database Instance on cloud object storage. You specify how many days you want to keep data in the live collection and a time window in which you want MongoDB Atlas to run the archiving job. To read more, please refer to the documentation.

In addition, you can query your Online Archive data with SQL. To learn more, please refer to Query with Atlas SQL.

Hope it helps!

Best regards,