Running a query on a huge collection brings CPU usage to 100%

Thanks for responding.

Query: the query fetches a distinct count of field_name (I can't share the actual field name for confidentiality reasons).
collection_name → a huge collection with more than 99M documents

db.getCollection("collection_name").aggregate([
    {
        "$group": {
            "_id": "$field_name",
            "count": { "$sum": NumberInt(1) }
        }
    }
]);
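For clarity, here is what that $group stage computes, sketched in plain JavaScript over a handful of made-up sample documents (the field values are hypothetical, just to illustrate the grouping):

```javascript
// Hypothetical sample documents standing in for collection_name
const docs = [
  { field_name: "a" },
  { field_name: "b" },
  { field_name: "a" },
  { field_name: "c" },
  { field_name: "b" },
];

// Equivalent of { $group: { _id: "$field_name", count: { $sum: 1 } } }
const counts = docs.reduce((acc, doc) => {
  acc[doc.field_name] = (acc[doc.field_name] || 0) + 1;
  return acc;
}, {});

// The distinct count is the number of group keys
const distinctCount = Object.keys(counts).length;
console.log(counts);        // { a: 2, b: 2, c: 1 }
console.log(distinctCount); // 3
```

On 99M documents the server has to do this same work for every document, which is why the stage is CPU-heavy.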

Hope this information helps you. @Kobe_W @Samuel_84194

Here are some suggestions I came across while searching:

  • Use $limit & $skip to optimize the query
  • Apply Indexing to the collection

But the collection I am trying to run this query on already has proper indexing, and I don't think skip & limit are feasible here: they wouldn't give a correct count, since some values of field_name may only occur beyond the limit window.
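A minimal plain-JavaScript sketch (the data is made up) of why a skip/limit window undercounts distinct values:

```javascript
// Hypothetical data: the value "c" first appears after the limit window
const docs = ["a", "b", "a", "b", "a", "c"].map(v => ({ field_name: v }));

// A skip=0, limit=4 window, as the suggestion implies
const window = docs.slice(0, 4);

const distinctInWindow = new Set(window.map(d => d.field_name)).size;
const distinctOverall  = new Set(docs.map(d => d.field_name)).size;

console.log(distinctInWindow); // 2 — "c" is missed by the window
console.log(distinctOverall);  // 3
```

Any value of field_name that only occurs past the limit is simply never seen, so the per-value counts and the distinct total both come out wrong.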