Hello there,
I have created a schema like this:
{
  updated_at: new Date(),
  students: [
    {
      section: "B",
      name: "anshul"
    },
    {
      section: "A",
      name: "shubham"
    }
  ]
}
There are about 1 million documents like this, each with different names (lowercase only), and the students array always contains exactly two elements.
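For context, a minimal Mongoose schema sketch of this shape (the model name Students is taken from the query below; the field types are my assumption):

const mongoose = require("mongoose");

// Minimal sketch, assuming standard Mongoose types for the fields above
const studentSubSchema = new mongoose.Schema(
  {
    section: String, // e.g. "A" or "B"
    name: String     // lowercase names only
  },
  { _id: false }
);

const studentsSchema = new mongoose.Schema({
  updated_at: { type: Date, default: Date.now },
  students: [studentSubSchema] // always exactly two elements per document
});

const Students = mongoose.model("Students", studentsSchema);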
My query looks like this:
Students.find({
  students: {
    $elemMatch: {
      section: "A",
      name: { $regex: "^s" }
    }
  }
})
  .sort({ updated: -1 })
  .limit(20)
I have created a compound index: { "students.section": 1, "updated": -1, "students.name": 1 }
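In shell form that would be roughly (the collection name is my assumption, based on the model name):

db.students.createIndex({ "students.section": 1, "updated": -1, "students.name": 1 })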
When the query matches many documents, this index works fine and the index scan stays low (around 20-22 keys examined). But when I search for a specific name, or a name that occurs rarely in the collection, nearly the whole index gets scanned (around 1 million keys examined).
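For reference, a sketch of how the keys-examined count can be checked with explain (assuming a recent Mongoose where Query.prototype.explain is available; the exact call may differ by driver version):

Students.find({
  students: { $elemMatch: { section: "A", name: { $regex: "^s" } } }
})
  .sort({ updated: -1 })
  .limit(20)
  .explain("executionStats")
  .then(result => {
    // totalKeysExamined is the index-scan count mentioned above
    console.log(result.executionStats.totalKeysExamined);
  });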
Could you help me figure out how to improve the performance and reduce the number of index keys scanned?