Best way to update big datasets

Hello,
I have some collections with a couple million documents each. I have some aggregation pipelines that I use on small datasets (50-100k documents), but when I run them against the large collections they take a very long time.
Am I using the aggregation framework for something it is not designed to do?
What can I do to improve performance?
My aggregations mostly use $setField to set some field values, and some of those $setField steps use the $function operator to run small pieces of JavaScript.
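
For context, here is a minimal sketch of the shape of my pipelines. The collection name, field names, and the $merge write-back stage are placeholders, not my real schema:

```javascript
// Sketch only: "orders", "status", and "normalizedStatus" are made-up names.
db.orders.aggregate([
  {
    $replaceWith: {
      $setField: {
        field: "normalizedStatus", // the field being set
        input: "$$ROOT",
        value: {
          // small per-document JavaScript transformation
          $function: {
            body: function (status) {
              return status ? status.trim().toLowerCase() : "unknown";
            },
            args: ["$status"],
            lang: "js"
          }
        }
      }
    }
  },
  // write the transformed documents back (placeholder write-back stage)
  { $merge: { into: "orders", whenMatched: "replace" } }
])
```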