Best way to update big datasets

Hello,
I have some collections with a couple of million documents each. I have some aggregation pipelines that work fine on small datasets (50-100k documents), but when I run them against the large collections they take a very long time.
Am I using the aggregation framework for something it is not designed to do? What can I do to improve performance?
My aggregations mostly use $setField to set some field values, and some of those $setField stages use the $function operator to run small pieces of JavaScript (roughly the shape sketched below).
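
For reference, here is a minimal sketch of the pattern I mean. The collection name (orders), the field names (amount, discounted), and the discount logic are all hypothetical placeholders, not the actual pipeline:

```javascript
// Hypothetical example: set a computed field via $setField,
// with the value produced by server-side JavaScript ($function).
db.orders.aggregate([
  {
    $replaceWith: {
      $setField: {
        field: "discounted",      // field to set on each document
        input: "$$ROOT",          // operate on the whole document
        value: {
          $function: {            // runs JavaScript on the server, per document
            body: function (amount) { return amount * 0.9; },
            args: ["$amount"],    // field value passed into the JS function
            lang: "js"
          }
        }
      }
    }
  }
])
```

(For context: $function executes JavaScript for each document, which the MongoDB documentation notes is slower than native aggregation operators.)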

Hello @Ali_ihsan_Erdem1,

Could you please share the information below, to help us understand the aggregation and data model being used?

  • The aggregation pipeline
  • An example input document
  • An example output document
  • The MongoDB version being used
  • The deployment architecture (hardware/software)
  • Whether the collections are indexed, and if so, which indexes exist (the snippet below shows how to list them)
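
If it helps with gathering those details, here is a small mongosh sketch; "orders" is a placeholder collection name:

```javascript
// MongoDB server version
db.version()

// Indexes defined on the collection
db.orders.getIndexes()

// Run the pipeline with execution statistics to see where time is spent
db.orders.explain("executionStats").aggregate([ /* your pipeline here */ ])
```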

Regards,
Tarun
