One way to improve performance is to reduce the size of the working set.
In your case, improving the schema will somewhat reduce the working set. Your schema is verbose, and since field names are stored with each document, shortening them will help. Rather than having a list of fields all prefixed with property_, you could have a more elegant schema with a top-level object named property that groups together all your property_ fields.
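For example, here is a sketch of the idea; only the *_id field names come from your pipeline, the others are illustrative:

/* before: every field repeats the property_ prefix */
{
    property_id: 1,
    property_community_id: 2,
    /* ... more property_ fields ... */
    client_id: 3,
    agent_id: 4
}

/* after: one short top-level name replaces the repeated prefix */
{
    property: {
        id: 1,
        community_id: 2,
        /* ... */
    },
    client_id: 3,
    agent_id: 4
}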
This schema reduces your document size by 120 bytes. There have been other discussions about the additional advantages you get from grouping related fields like this.
As mentioned by Peter, $group is blocking. You may reduce the blocking time by deferring some of the $group work. In your case, you could delay $sum and $first by moving these operations into a later $lookup stage. The pipeline could look like:
let match = { $match: { /* same as you have */ } }
let group = { $group: {
    _id: {
        property: "$property_id",
        client: "$client_id",
        agent: "$agent_id",
    },
} }
/* With no accumulators, $group takes less memory and executes faster */
let lookup_count = { $lookup: {
    from: "the_same_collection", /* the collection you are aggregating */
    let: { gid: "$_id" }, /* a let variable name cannot start with an underscore */
    pipeline: [
        { $match: { $expr: { $and: [
            { $eq: [ "$property_id", "$$gid.property" ] },
            { $eq: [ "$client_id", "$$gid.client" ] },
            { $eq: [ "$agent_id", "$$gid.agent" ] },
        ] } } },
        { $count: "n" },
    ],
    as: "count",
} }
let sort = { $sort: { /* same as you have */ } }
/* The sort is done in memory because it is on a computed value; by not
   carrying property_community_id into the sorted documents, the working
   set is smaller */
let lookup_property = { /* a simple $lookup just to get property_community_id */ }
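A minimal sketch of how the stages assemble, assuming the stage variables above and the placeholder collection name the_same_collection:

db.the_same_collection.aggregate( [ match, group, lookup_count, sort, lookup_property ] )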
Another idea, based on the assumption (since you do $first on community_id) that each triplet maps to a single community_id: you could start with a $group that gets all the distinct community_id values, then use a $lookup to recover the original triplets.
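Here is a rough sketch of that idea, reusing the same assumed collection name and the field names from your pipeline:

let group_communities = { $group: { _id: "$property_community_id" } }
let lookup_triplets = { $lookup: {
    from: "the_same_collection", /* placeholder, same self-lookup as above */
    let: { cid: "$_id" },
    pipeline: [
        { $match: { $expr: { $eq: [ "$property_community_id", "$$cid" ] } } },
        /* rebuild the distinct triplets within this community */
        { $group: { _id: {
            property: "$property_id",
            client: "$client_id",
            agent: "$agent_id",
        } } },
    ],
    as: "triplets",
} }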
I remember a discussion, in which @chris and @Asya_Kamsky were perhaps participating, where this idea of $group followed by $lookup was used to perform the $group accumulations later, all to reduce the working set of $group and its blocking time.