Capped Collections as a logging solution

I am reviewing how to store the logs of our service apps in MongoDB. I found “Capped Collections” and “TTL”, and initially I thought “Capped Collections” would be suitable.
But there is a problem.
A capped collection overwrites the oldest documents when the specified size is exceeded. We cannot accept that, and we cannot predict the size in advance.
When the specified size is reached, is there no rollover function?
If not, should I look for a solution outside MongoDB for my case?

Hi @111946,

Can you explain your logging use case a bit further?

Capped collections have more specialised uses and are limited (aka “capped”) based on a maximum collection size (and optionally a maximum number of documents). You might use capped collections for logging if you want to limit the amount of space consumed by log entries, but as you noted, the oldest entries will automatically be removed once the capped storage limit is exceeded.
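As a minimal sketch (using PyMongo; the connection string, database, and collection names are placeholders), creating a capped collection looks like this:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
db = client["myapp"]                               # hypothetical database name

# Create a capped collection limited to ~100 MB and at most 500,000 documents.
# Once either limit is reached, the oldest documents are overwritten.
db.create_collection(
    "service_logs",          # hypothetical collection name
    capped=True,
    size=100 * 1024 * 1024,  # maximum size in bytes (required for capped collections)
    max=500_000,             # optional maximum number of documents
)
```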

Time-to-Live (TTL) indexes expire documents after a specified number of seconds or based on a per-document expiry time. TTL indexes could be used for logging where you want to retain data within a certain time window, for example, keeping only the last 7 days’ worth of log documents.
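A rough sketch of the 7-day example (again using PyMongo; the `createdAt` field and collection names are assumptions):

```python
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
logs = client["myapp"]["service_logs"]             # hypothetical database/collection

# TTL index: documents become eligible for removal ~7 days after the value
# in their "createdAt" field. A background task removes expired documents
# periodically (every 60 seconds by default).
logs.create_index("createdAt", expireAfterSeconds=7 * 24 * 60 * 60)

# Each log document needs a BSON date in the indexed field for expiry to apply.
logs.insert_one({
    "createdAt": datetime.datetime.now(datetime.timezone.utc),
    "level": "INFO",
    "message": "service started",
})
```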

When the specified size is reached, is there no rollover function?

Deletes from capped collections (and via TTL indexes) are automatic.

If you want some sort of rollover/archival logic for documents prior to removal, you could implement your own deletion logic (for example, running a scheduled job to archive and then remove older log documents).
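For example, a minimal sketch of such a job (assuming a regular, non-capped collection with a `createdAt` field; the archive collection name and 30-day cutoff are arbitrary choices):

```python
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
db = client["myapp"]                               # hypothetical database name
logs = db["service_logs"]                          # live log collection (assumed name)
archive = db["service_logs_archive"]               # archive destination (assumed name)

def archive_old_logs(days: int = 30) -> None:
    """Copy log documents older than `days` to the archive, then delete them."""
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=days)
    old_docs = list(logs.find({"createdAt": {"$lt": cutoff}}))
    if old_docs:
        archive.insert_many(old_docs)                      # archive first...
        logs.delete_many({"createdAt": {"$lt": cutoff}})   # ...then remove from the live collection

# Run from a scheduler (cron, Celery beat, etc.) rather than inline in the app.
if __name__ == "__main__":
    archive_old_logs()
```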

MongoDB Atlas has an Online Archive feature for M10+ clusters that archives documents to cloud object storage for querying via Atlas Data Lake. Even if you are not an Atlas user, you may find the description of How Atlas Archives Data helpful for designing your own approach.

Regards,
Stennie
