I am already successfully using Atlas Online Archive on a few collections. However, I have a different, very large collection that I would also like to archive. It does, very infrequently, require updates to old documents, so moving it off to a read-only archive isn't an option. Ideally I would like to move, say, data older than 1 year to a live cluster that has the same capabilities as the one all the current data lives on. The point is to keep my very large collection smaller by holding only 1 year's worth of data in it, which would help with queries.
I suppose I could do it myself by creating a new cluster and writing a script that moves data off at certain intervals, but I was wondering whether Atlas already has infrastructure in place for this: moving data from the live cluster to the archive cluster incrementally, while giving me a unified endpoint to query both.