Handling Arrays with Indexing for Faster Querying

I have an array field called “arr”, which is an array of objects; each object has ~10 fields, and the array holds ~5 elements on average.
One of the fields in each “arr” element is called “name”, and that name is the only thing I care about.
I also have a “time” field outside of “arr”, at the top level of the document; it is of type Date and stores the timestamp.
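For illustration, a minimal sketch of what one document might look like (every field name other than time, arr and arr.name is hypothetical):

import datetime

sample_doc = {
    "time": datetime.datetime(2024, 7, 15, 8, 30, tzinfo=datetime.timezone.utc),
    "arr": [
        # in reality each element has ~10 fields and the array has ~5 elements;
        # only "name" matters for the query below
        {"name": "alpha", "status": "ok"},
        {"name": "beta", "status": "failed"},
    ],
}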
I make a simple query like this in my Python code -

collection.find(
    {
        "time": {
            "$gte": datetime.datetime(2024, 7, 12, tzinfo=datetime.timezone.utc),
            "$lte": datetime.datetime(2024, 7, 22, tzinfo=datetime.timezone.utc),
        }
    },
    {"arr.name": 1, "_id": 0},
)

I have indexes on time_1, time_-1, time_1_arr.name_1 and time_-1_arr.name_-1.
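For reference, those four indexes correspond to something like the following PyMongo calls (a sketch reconstructed from the index names above; `collection` is the same collection object used in the query):

import pymongo

collection.create_index([("time", pymongo.ASCENDING)])      # time_1
collection.create_index([("time", pymongo.DESCENDING)])     # time_-1
collection.create_index([("time", pymongo.ASCENDING),
                         ("arr.name", pymongo.ASCENDING)])   # time_1_arr.name_1
collection.create_index([("time", pymongo.DESCENDING),
                         ("arr.name", pymongo.DESCENDING)])  # time_-1_arr.name_-1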
I notice that even when I project only the arr.name field, MongoDB takes a long time to return the data.
Looking further into the explain output, I see that keysExamined = docsExamined = nReturned = 50000 and the query takes ~10 seconds.
My reading of this is that MongoDB is fetching all 50K documents (hitting disk when they are not cached) just to return the projected arr.name field: even though the index time_1_arr.name_1 exists, MongoDB cannot return the value of arr.name from the index alone and has to read each document.
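For what it's worth, those counters can be read straight out of the explain output from PyMongo; a sketch, assuming the same collection and filter as above (Cursor.explain() runs the explain command, and the executionStats section carries these numbers):

import datetime

cursor = collection.find(
    {
        "time": {
            "$gte": datetime.datetime(2024, 7, 12, tzinfo=datetime.timezone.utc),
            "$lte": datetime.datetime(2024, 7, 22, tzinfo=datetime.timezone.utc),
        }
    },
    {"arr.name": 1, "_id": 0},
)
stats = cursor.explain()["executionStats"]
print(stats["totalKeysExamined"],    # 50000 in my case
      stats["totalDocsExamined"],    # 50000
      stats["nReturned"],            # 50000
      stats["executionTimeMillis"])  # ~10000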

Q1. If this is true, how can queries involving arrays in MongoDB ever become covered queries?
Q2. Does this mean that querying or projecting fields from array objects will always be slower, because MongoDB has to read the documents no matter what index exists?

Try removing the indexes time_1, time_-1 and time_-1_arr.name_-1.

The time_1_arr.name_1 index should be sufficient even for queries that only need time, because such queries can use the time prefix of the compound index.
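A sketch of that cleanup with PyMongo, using the index names from the question (worth confirming the exact names with index_information() first):

# `collection` as used in the question
print(collection.index_information())   # confirm the exact index names

collection.drop_index("time_1")
collection.drop_index("time_-1")
collection.drop_index("time_-1_arr.name_-1")

# the remaining compound index; its "time" prefix serves time-only queries
collection.create_index([("time", 1), ("arr.name", 1)])  # time_1_arr.name_1, no-op if it already exists

Since MongoDB can traverse an index in either direction, the descending variants add write overhead on every insert without helping these reads.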