Question - iterating a large number of documents

Hello,

For simplicity, I'm asking about a single node. When MongoDB iterates over a large portion of documents (to answer some query that requires scanning the collection) and requires that all iterated data be coherent as of the same point in time, does this cause all reads in the iteration to be done under the same storage engine transaction? (A rough sketch of the kind of scan I mean is below, after the two questions.)

If not, what mechanism in WiredTiger does MongoDB use to ensure that:

  1. older versions of the data that are relevant to that point in time won't be removed during the iteration
  2. newer versions of the data won't be visible to the iteration
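
To make the scenario concrete, here is roughly the kind of scan I have in mind (a sketch only; the database/collection names, the filter, and the process() function are placeholders):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["mycoll"]  # placeholder database/collection

# Iterate a large portion of the collection. I need every document
# returned by this cursor to reflect the same point in time.
for doc in coll.find({"status": "active"}):  # placeholder filter
    process(doc)  # placeholder processing function
```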

Thanks,

Roey.

I recall that by default it's a snapshot read, similar to repeatable read in MySQL's case.

My guess is that a big read like this will use it within a transaction.

However, it's not the same as serializable isolation, so it's worth looking up snapshot isolation on Wikipedia.
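
If you want to pin the whole iteration to a single snapshot explicitly, a minimal sketch with pymongo might look like the following (my assumptions: MongoDB 4.0+ running as a replica set, and placeholder database/collection names):

```python
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern

client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["mycoll"]  # placeholder database/collection

with client.start_session() as session:
    # Reads inside the transaction use readConcern "snapshot", so every
    # batch of the cursor is served from the same point-in-time snapshot.
    with session.start_transaction(read_concern=ReadConcern("snapshot")):
        for doc in coll.find({}, session=session):
            pass  # consume the documents here
```

Note that the cursor has to be fully iterated while the transaction is still open.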