Capped Collections for a logging solution

I am looking into how to store the logs of our service apps in MongoDB. I found “Capped Collections” and “TTL”, and initially I thought capped collections would be suitable.
But there is a problem.
A capped collection overwrites the oldest documents once the specified size is exceeded. We cannot accept that, and we cannot predict the required size in advance.
When the specified size is reached, is there no rollover function?
If not, should I look for a solution outside MongoDB for my case?


Hi @111946,

Can you explain your logging use case a bit further?

Capped collections have more specialised uses and are limited (aka “capped”) based on a maximum collection size (and optionally a maximum number of documents). You might use capped collections for logging if you want to limit the amount of space consumed by log entries, but as you noted the oldest entries will automatically be removed once the capped storage limit is exceeded.
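For example, here is a minimal sketch of creating a capped collection with PyMongo (the connection string, database, and collection names are assumptions for illustration):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
db = client["logging_demo"]                        # hypothetical database name

# Create a capped collection limited to ~100 MB and at most 1,000,000 documents.
# Once either limit is reached, the oldest documents are removed automatically.
db.create_collection(
    "app_logs",
    capped=True,
    size=100 * 1024 * 1024,  # maximum size in bytes (required for capped collections)
    max=1_000_000,           # optional maximum number of documents
)
```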

Time-to-Live (TTL) indexes expire documents after a specified number of seconds or based on a per-document expiry time. TTL indexes could be used for logging where you want to retain data within a certain time window. For example, only keep the last 7 days' worth of log documents.
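A minimal TTL sketch with PyMongo (names are assumptions; the indexed field must hold a BSON date):

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
logs = client["logging_demo"]["app_logs"]           # hypothetical database/collection

# TTL index: documents are removed roughly 7 days after the value in "createdAt".
logs.create_index("createdAt", expireAfterSeconds=7 * 24 * 60 * 60)

# Each log document needs a date field for the TTL monitor to act on.
logs.insert_one({
    "createdAt": datetime.now(timezone.utc),
    "level": "INFO",
    "message": "service started",
})
```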

When the specified size is reached, is there no rollover function?

Deletes from capped collections (and via TTL indexes) are automatic.

If you want to have some sort of rollover/archival logic for documents prior to removal you could implement your own deletion logic (for example, running a scheduled job to remove and archive older log documents).
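As a rough sketch of that idea, assuming a regular (non-capped) collection with a TTL index and hypothetical names, a scheduled job could copy older documents to an archive collection before deleting them:

```python
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

# Hypothetical names; run this from cron or another scheduler.
client = MongoClient("mongodb://localhost:27017")
db = client["logging_demo"]
live_logs = db["app_logs"]
archive = db["app_logs_archive"]  # the archive could also be files or object storage

# Archive anything older than 6 days, i.e. before a 7-day TTL would remove it.
cutoff = datetime.now(timezone.utc) - timedelta(days=6)
query = {"createdAt": {"$lt": cutoff}}

old_docs = list(live_logs.find(query))
if old_docs:
    archive.insert_many(old_docs)   # copy to the archive first
    live_logs.delete_many(query)    # then remove from the live collection
```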

MongoDB Atlas has an Online Archive feature for M10+ clusters that archives documents to cloud object storage for querying via Atlas Data Lake. Even if you are not an Atlas user, you may find the description of How Atlas Archives Data helpful for designing your own approach.

Regards,
Stennie


Just to add a point in this thread.

Can we automatically retrieve the records that are removed because of the capped collection?

I have NOT done it with the automatic deletions performed by TTL indexes, but with change streams you can watch for delete operations. It should work the same way.
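As a rough sketch (Python/PyMongo, hypothetical names; change streams require a replica set, and by default a delete event only carries the _id of the removed document):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
logs = client["logging_demo"]["app_logs"]          # hypothetical database/collection

# Only pass delete events through the change stream.
pipeline = [{"$match": {"operationType": "delete"}}]

with logs.watch(pipeline) as stream:
    for change in stream:
        # A delete event contains the documentKey (the _id), not the full document,
        # so archiving the content requires having captured it earlier
        # (or enabling pre-images on MongoDB 6.0+).
        print("deleted:", change["documentKey"]["_id"])
```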

Can you provide a reference or an example implementation of this flow?

You may certainly find examples in


You have to implement a custom log rotation system, like Splunk or Elasticsearch do.


It means we have to write a trigger that periodically retrieves the documents before they get deleted.
Okay, noted.
Thank you for your help :smiling_face:.

Raman, welcome to the community!

Let me research implementing this kind of system. I will do that, and if it gets me to a solution I will definitely ping you back. For now, I will move towards the solution suggested by @steevej .
But thanks anyway for your suggestion :smiling_face:.