Difference between resumeAt and resumeToken in change streams

Hi,

What is the difference between the behavior of resumeAt, which accepts a timestamp to resume notifications from, and resumeToken, which accepts a resume token?

return ChangeStreamOptions.builder()
        .filter(Aggregation.newAggregation(Example.class, matchOperationType))
        .resumeAt(Instant.ofEpochSecond(1675303335)) // this is simply a Unix timestamp
        .resumeToken(tokenDoc) // resume token saved from a previous notification
        .returnFullDocumentOnUpdate()
        .build();

In case the application crashes or is restarted, wouldn't it be simpler to pass in a Unix timestamp of a reasonable past time (ranging from a few hours to a few days) rather than building application logic to save the token of every last successfully processed message?

Hello @Darshan_Bangre ,

The .resumeAt(Object token) method is a Spring Framework method that resumes the change stream at a given point. Below are some related details from the Spring Data documentation:

Parameters:
token - an Instant or BsonTimestamp
Returns:
new instance of ReactiveChangeStreamOperation.TerminatingChangeStream.
Throws:
IllegalArgumentException - if the given beacon is neither Instant nor BsonTimestamp.

As per my understanding, it resumes the stream from the nearest point in the past that is still available on the server.
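To illustrate the timestamp-based approach from the question, here is a minimal, self-contained sketch. The ResumePointExample class and its resumePoint helper are hypothetical (not part of Spring Data or the MongoDB driver); only the commented-out builder line refers to the actual Spring Data API shown in the question.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical helper: computes the Instant to pass to resumeAt(...),
// a fixed number of hours in the past.
public class ResumePointExample {

    public static Instant resumePoint(Instant now, long hoursBack) {
        return now.minus(Duration.ofHours(hoursBack));
    }

    public static void main(String[] args) {
        // e.g. resume from roughly two hours ago; the server then replays
        // changes from that time onward, provided they are still in the oplog.
        Instant resumeFrom = resumePoint(Instant.now(), 2);
        System.out.println(resumeFrom);

        // Sketch of how it would be used with Spring Data (from the question):
        // ChangeStreamOptions.builder().resumeAt(resumeFrom)...build();
    }
}
```

Note the trade-off: this is simple, but the resume point is only as precise as the timestamp, and any event older than the oplog window is lost.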

MongoDB, on the other hand, has the resume token, which lets you resume a change stream from a historical point in the oplog. It is a unique identifier that represents a specific point in the change stream. The server returns this token as part of each change event, and it can be used to resume the stream at the exact point where it left off. This gives you precise control over the resume point and ensures that no change events are missed.

Change streams in MongoDB are resumable by specifying a resume token to either resumeAfter or startAfter when opening the cursor.
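The "save the last token" logic the question asks about can be sketched with a small token store. This is a hypothetical, in-memory sketch (the ResumeTokenStore class is not part of any library, and the token string is a made-up example); a real application would persist the token to durable storage so it survives a restart, then pass it to resumeAfter or startAfter on startup.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical in-memory store for the last successfully processed resume
// token. In production this would write to a database or file instead.
public class ResumeTokenStore {

    private final AtomicReference<String> lastToken = new AtomicReference<>();

    // Called after each change event has been fully processed.
    public void save(String resumeTokenJson) {
        lastToken.set(resumeTokenJson);
    }

    // Called on startup: if non-null, pass the token to resumeAfter/startAfter.
    public String load() {
        return lastToken.get();
    }

    public static void main(String[] args) {
        ResumeTokenStore store = new ResumeTokenStore();
        store.save("{\"_data\": \"example-token\"}"); // token from a change event
        System.out.println(store.load());
    }
}
```

The extra bookkeeping buys exactness: on restart the stream continues from the precise event after the last one processed, with no duplicates from replaying a time window.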

I would recommend you to analyse your requirements and decide on your development approach accordingly.

Note: If you anticipate interrupting the stream processing, ensure the oplog is large enough to hold the unprocessed changes (writes) before you resume the stream.

To learn more about this, please refer to the MongoDB documentation on change streams.

Regards,
Tarun

