In case it helps any future reader, here is what I learned from some experimentation, posted as an answer to this thread.
While the bulk remove was running, the query took long enough that the application could keep writing documents with the same key values, i.e. documents that matched my removal filter. So while the application client was writing, another thread was busy finding all such documents and removing them.
(Needs confirmation) Because the matching documents are worked through as a cursor, each fetch of the next batch also picks up any documents written in the meantime, so those intermediate writes become candidates for removal as well.
(Finding) This is exactly what happened when I retried the removal with test data sized to leave enough time for writes in between: I lost the complete set of documents written to MongoDB that matched my filter query, right up until the moment the query execution completed.
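To make the effect concrete, here is a toy in-memory simulation (not real MongoDB or pymongo code; the collection, filter key, and timings are all made up) of a batched, cursor-style delete racing a concurrent writer. The delete thread repeatedly fetches the "next batch" of matching documents, so documents inserted after the delete started can still be swept up:

```python
import threading
import time

# Toy stand-in for a collection: a plain list of dicts, NOT a MongoDB client.
collection = []
lock = threading.Lock()

def delete_matching(match_key, batch_size=2, passes=5):
    """Remove docs where doc['key'] == match_key, one small batch per pass,
    mimicking a cursor that re-fetches matching documents between batches."""
    removed = 0
    for _ in range(passes):
        with lock:
            batch = [d for d in collection if d["key"] == match_key][:batch_size]
            for d in batch:
                collection.remove(d)
                removed += 1
        time.sleep(0.01)  # simulated latency between batch fetches
    return removed

def writer(match_key, count):
    """Concurrently insert documents that match the removal filter."""
    for i in range(count):
        with lock:
            collection.append({"key": match_key, "seq": i})
        time.sleep(0.005)

# Three matching documents exist before the delete begins.
collection.extend({"key": "a", "seq": -i} for i in range(1, 4))

t = threading.Thread(target=writer, args=("a", 6))
t.start()
removed = delete_matching("a")
t.join()

print(removed, len(collection))
```

With these timings, some of the writer's six documents are usually inserted while the delete is still iterating and get removed along with the original three, which mirrors the data loss described above. One way to avoid this in real MongoDB (an assumption on my part, not something from the thread) is to bound the delete filter, e.g. by a captured timestamp or a maximum `_id`, so documents written after the operation started no longer match.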