Hi Everyone!
I have two collections. The first, Query, contains:
[
  {
    _id: "66bc7e0cbc5dcf3b76f3f55e",
    query: "wrapper"
  }
]
The second, BigData, contains a large amount of data.
I want to compare the performance of the following two commands.
The first command:
db.BigData.aggregate([
  {
    $match: {
      key1: "condition1"
    }
  },
  {
    $limit: 25
  },
  {
    $project: {
      ...
    }
  }
])
The second command:
db.Query.aggregate([
  {
    $limit: 1
  },
  {
    $set: {
      "success": true
    }
  },
  {
    $lookup: {
      from: "BigData",
      pipeline: [
        {
          $match: {
            key1: "condition1"
          }
        },
        {
          $limit: 25
        },
        {
          $project: {
            ...
          }
        }
      ],
      as: "data"
    }
  }
])
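For reference, the second command currently gives me a result shaped roughly like this (only the shape matters here; the contents of data are illustrative):

[
  {
    _id: "66bc7e0cbc5dcf3b76f3f55e",
    query: "wrapper",
    success: true,
    data: [
      // up to 25 projected BigData documents
    ]
  }
]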
Is the performance of these two commands equivalent?
If they are equivalent, is using nested $lookup stages, as in the second command, an optimal way to join documents? If the performance differs, how should I structure the query to optimize it?
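In case it helps frame the comparison, I assume the plans for both commands can be inspected in mongosh with explain, roughly like this (the $project stages are omitted here, as above):

// plan for the first command
db.BigData.explain("executionStats").aggregate([
  { $match: { key1: "condition1" } },
  { $limit: 25 }
])

// plan for the second command
db.Query.explain("executionStats").aggregate([
  { $limit: 1 },
  { $set: { "success": true } },
  {
    $lookup: {
      from: "BigData",
      pipeline: [
        { $match: { key1: "condition1" } },
        { $limit: 25 }
      ],
      as: "data"
    }
  }
])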
I prefer the second command because I want all of my queries to return data in a specific structure:
{
  success: true,
  data: [],
  totalCount: "pipeline Count based on a complex condition"
}
However, the second command returns an array rather than the single object I need. Can you help me modify the command so that it returns the result as a single object, in the most efficient way for large data?
Thanks in advance!