Fast and reliable way to export a collection with approximately ten million documents?

No sharded clusters. Check.

Sharding is needed when the total size starts to matter. 768 floating point numbers plus the other fields should not add up to more than about 10 KB per document, but you are saying "ten million documents," which would correspond to roughly 100 GB. So whether to shard is a borderline call here.
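If you want to check that estimate against the real numbers, the `collStats` command reports the average document size and the document count. A minimal sketch with pymongo, assuming placeholder connection, database, and collection names:

```python
# Rough sanity check of the size math. The URI, database name, and
# collection name below are placeholders, not values from your setup.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<pass>@<cluster>/")
db = client["mydb"]

stats = db.command("collStats", "embeddings")
avg_doc = stats["avgObjSize"]        # average document size in bytes
total = stats["count"] * avg_doc     # uncompressed data size estimate

print(f"avg document: {avg_doc / 1024:.1f} KB")
print(f"total data:   {total / 1024**3:.1f} GB")
```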

You said you try exporting 100,000 documents at a time, which would correspond to about 1 GB per batch. That is not much on its own, until you said you run 64 of them concurrently. Things go a little blurry here. An M40 should deal with these batch sizes, and maybe also the load from 64 operations as well.

But in return, you need enough bandwidth to receive the responses from each of those operations before timeouts start to happen. Depending on the network speed (plus other settings), anything can happen.
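One way to keep both the server load and the bandwidth in check is to pre-split the collection into `_id` ranges once, then export the ranges with a much smaller worker pool instead of 64 concurrent operations. This is only a sketch under assumptions: pymongo, a placeholder connection string, and a collection called `embeddings`; the batch size and worker count are starting points to experiment with, not recommendations.

```python
# Range-based export with bounded concurrency (sketch).
from concurrent.futures import ThreadPoolExecutor
from bson.json_util import dumps  # handles ObjectId, dates, etc.
from pymongo import MongoClient

BATCH = 100_000   # documents per range; try smaller values too
WORKERS = 8       # far fewer than 64; raise only if bandwidth allows

client = MongoClient("mongodb+srv://<user>:<pass>@<cluster>/")  # placeholder
coll = client["mydb"]["embeddings"]                             # placeholder

# One pass over the _id index to find range boundaries.
# Iterating 10M ids takes a while, but it only fetches _id values.
boundaries = [doc["_id"] for i, doc in
              enumerate(coll.find({}, {"_id": 1}).sort("_id", 1))
              if i % BATCH == 0]
boundaries.append(None)  # open-ended last range

def export_range(idx):
    lo, hi = boundaries[idx], boundaries[idx + 1]
    query = ({"_id": {"$gte": lo}} if hi is None
             else {"_id": {"$gte": lo, "$lt": hi}})
    with open(f"export_{idx:05d}.jsonl", "w") as f:
        for doc in coll.find(query).sort("_id", 1):
            f.write(dumps(doc) + "\n")

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    list(pool.map(export_range, range(len(boundaries) - 1)))
```

The point of the `_id` ranges is that each worker runs an indexed range scan instead of a large skip, so the per-operation cost stays flat no matter how deep into the collection it reads.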

It would help if you posted the full line of that error with the "256" code. I could not find a direct reference, and traversing the source code with just that number is not easy because lots of other keywords contain 256 in them; "SCRAM-SHA-256" alone accounts for many of the matches.

Assuming your data includes timestamps, converting to a time series collection is still a valid suggestion for your documents for future purposes.
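For reference, creating one is a single call in the driver. A minimal sketch, assuming pymongo, MongoDB 5.0+, and a timestamp field named `ts` (all names here are placeholders):

```python
# Create a time series collection (sketch; requires MongoDB 5.0+).
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<pass>@<cluster>/")  # placeholder
db = client["mydb"]

db.create_collection(
    "embeddings_ts",
    timeseries={
        "timeField": "ts",       # required: the timestamp field
        "metaField": "source",   # optional: groups related series
        "granularity": "hours",  # pick to match your ingest rate
    },
)
```

Existing documents would still need to be copied over in batches, since a regular collection cannot be converted in place.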

Also, I would like to hear the result if you try smaller batch sizes when fetching the data.