Thanks, @Prasad_Saya, for your valuable reply. I have executed the query with explain().
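For context, this is roughly how the query was run, reconstructed from the plan below; the collection name `articles` is a placeholder, and the date values are decoded from the index bounds shown in the IXSCAN stage:

```js
// Sketch of the query shape, reconstructed from the explain plan.
// "articles" is a placeholder collection name.
db.articles.find(
  {
    date: {
      $gte: ISODate("2021-08-01T00:00:00Z"),  // 1627776000000
      $lte: ISODate("2021-08-30T23:59:59Z")   // 1630367999000
    }
  },
  { _id: 1, url: 1, title: 1 }       // matches the PROJECTION_DEFAULT stage
).sort({ "source.value": -1 })       // matches the SORT stage
 .limit(5)                           // matches limitAmount: 5
 .explain("executionStats")
```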
Here is the executionStats result:
"executionSuccess" : true,
"nReturned" : 5,
"executionTimeMillis" : 46698,
"totalKeysExamined" : 261722,
"totalDocsExamined" : 261722,
"executionStages" : {
"stage" : "PROJECTION_DEFAULT",
"nReturned" : 5,
"executionTimeMillisEstimate" : 38692,
"works" : 261729,
"advanced" : 5,
"needTime" : 261723,
"needYield" : 0,
"saveState" : 2505,
"restoreState" : 2505,
"isEOF" : 1,
"transformBy" : {
"_id" : true,
"url" : true,
"title" : true
},
"inputStage" : {
"stage" : "SORT",
"nReturned" : 5,
"executionTimeMillisEstimate" : 38682,
"works" : 261729,
"advanced" : 5,
"needTime" : 261723,
"needYield" : 0,
"saveState" : 2505,
"restoreState" : 2505,
"isEOF" : 1,
"sortPattern" : {
"source.value" : -1
},
"memLimit" : 104857600,
"limitAmount" : 5,
"type" : "simple",
"totalDataSizeSorted" : NumberLong("3592156118"),
"usedDisk" : false,
"inputStage" : {
"stage" : "FETCH",
"nReturned" : 261722,
"executionTimeMillisEstimate" : 37991,
"works" : 261723,
"advanced" : 261722,
"needTime" : 0,
"needYield" : 0,
"saveState" : 2505,
"restoreState" : 2505,
"isEOF" : 1,
"docsExamined" : 261722,
"alreadyHasObj" : 0,
"inputStage" : {
"stage" : "IXSCAN",
"nReturned" : 261722,
"executionTimeMillisEstimate" : 220,
"works" : 261723,
"advanced" : 261722,
"needTime" : 0,
"needYield" : 0,
"saveState" : 2505,
"restoreState" : 2505,
"isEOF" : 1,
"keyPattern" : {
"date" : 1
},
"indexName" : "date_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"date" : [ ]
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"date" : [
"[new Date(1627776000000), new Date(1630367999000)]"
]
},
"keysExamined" : 261722,
"seeks" : 1,
"dupsTested" : 0,
"dupsDropped" : 0
}
}
}
}
From these stats, the IXSCAN stage is working fine (about 220 ms), but the FETCH and SORT stages are taking most of the time. So I am thinking that the working set and indexes do not fit into RAM. I have cross-checked my server configuration, and we have only 8 GB of RAM (I made a mistake in the above post).
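If memory is indeed the issue, one index-side option that might help regardless of RAM is MongoDB's ESR (Equality, Sort, Range) guideline: put the sort key before the range key so the blocking SORT stage can be avoided. A sketch, assuming the query filters on date and sorts on source.value, with `articles` again as a placeholder collection name:

```js
// ESR guideline: sort key first, range-filtered key last.
// The scan then walks source.value in descending order and applies
// the date range as index bounds, so with limit(5) it can stop early
// instead of fetching and sorting all 261,722 documents.
db.articles.createIndex({ "source.value": -1, date: 1 })
```

Depending on the data distribution, this may stop after examining only a handful of index keys; but if the documents with the highest source.value mostly fall outside the date range, it could still scan many keys.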
My working set is 261,722 docs × 13 KB ≈ 3.4 GB, my date field index is 110 MB, and the overall collection is 750 MB. We also have 2 more collections whose indexes add up to 16 GB combined.
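To double-check these numbers on the server itself (still with `articles` as a placeholder), something like the following should report the relevant sizes; note that storageSize is the compressed on-disk size, which may be why the collection (750 MB) looks smaller than the 3.4 GB of uncompressed data being sorted:

```js
// "size" = uncompressed data, "storageSize" = compressed on disk,
// "indexSizes" = per-index sizes; all values are in bytes.
const s = db.articles.stats()
printjson({ size: s.size, storageSize: s.storageSize, indexSizes: s.indexSizes })

// Total index size for a collection can also be read directly:
db.articles.totalIndexSize()
```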
The MongoDB documentation says the default WiredTiger cache is 50% of (RAM - 1 GB). In this case, that is 0.5 × (8 GB - 1 GB) = 3.5 GB of cache. So I suspect this could be a memory problem; please correct me if I am wrong. If so, will increasing the RAM solve the problem, or do you have any other suggestions?
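Before adding RAM, I plan to confirm the cache pressure using the WiredTiger counters from serverStatus; a sketch:

```js
// Configured WiredTiger cache size vs. what it currently holds.
const c = db.serverStatus().wiredTiger.cache
printjson({
  configuredBytes: c["maximum bytes configured"],
  inCacheBytes:   c["bytes currently in the cache"],
  // If this counter climbs rapidly while the query runs, pages are
  // being evicted and re-read, i.e. the working set does not fit.
  pagesReadIn:    c["pages read into cache"]
})
```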