MongoDB: slower find() queries when an index contains duplicate values

Duplicate values in the collection's indexes are necessary in my database because of its multi-tenant nature.

I have a collection “products” that looks like this:
{ _id: ObjectId, businessID: String (indexed), productCode: String (indexed), productName: String, keyN: valN, ... }
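For a concrete picture, a typical document might look something like this (the _id and field values below are made up for illustration; only the businessID matches the example I query further down):

{
  _id: ObjectId("64a1f2c3d4e5f6a7b8c9d0e1"),   // placeholder _id
  businessID: "617e557b88c7914a420e3211",      // the business (tenant) this product belongs to
  productCode: "SKU-000123",                   // placeholder code
  productName: "Example product",
  // ...other per-product fields (keyN: valN)
}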

So, as you can see, this collection contains the products of all the businesses: each business has a unique businessID, and all of its products are stored in this one collection.

The businessID field is indexed without the unique flag, which defaults to false.
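For completeness, the two secondary indexes were created roughly like this (reconstructed from the getIndexes() output below, so the exact original commands may have differed slightly):

db.products.createIndex({ businessID: 1 }, { background: true })
db.products.createIndex({ productCode: 1 }, { background: true })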

The issue I’m stuck on is the performance of find() queries. There are only 24,436 documents in this collection as of now, with 71 unique businessID values:

db.products.countDocuments()
// 24436
db.products.distinct("businessID").length
// 71
db.products.getIndexes()
[
  { v: 2, key: { _id: 1 }, name: '_id_' },
  {
    v: 2,
    key: { businessID: 1 },
    name: 'businessID_1',
    background: true
  },
  {
    v: 2,
    key: { productCode: 1 },
    name: 'productCode_1',
    background: true
  }
]

A query such as db.products.find({ businessID: "617e557b88c7914a420e3211" }) takes about 40 ms, which is quite fast, but I’m not sure how this will scale if the collection later grows to, say, a million documents with 1,000 distinct businessID values. Compared to this, other find() queries that go through unique indexes are noticeably faster.
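This is roughly how I’ve been inspecting the query plan (I haven’t pasted the full explain output here, just the command and the fields I look at):

db.products.find({ businessID: "617e557b88c7914a420e3211" }).explain("executionStats")
// I check executionStats.executionTimeMillis, totalKeysExamined and
// totalDocsExamined to confirm an IXSCAN on businessID_1 is being used
// rather than a collection scan.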

Are there any options or flags, something like { commonIndex: true }, that would make these queries faster? Since the indexes are already in place and the _id values are effectively grouped per businessID inside the index, shouldn’t it be faster to fetch exactly those documents through the index?
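For reference, this is the kind of comparison I’m making between a unique-index lookup and the non-unique businessID lookup (the ObjectId below is just a placeholder, and the timings are from my own environment):

// lookup on the unique _id index (returns at most one document)
db.products.find({ _id: ObjectId("64a1f2c3d4e5f6a7b8c9d0e1") })

// lookup on the non-unique businessID index (returns every product of that business)
db.products.find({ businessID: "617e557b88c7914a420e3211" })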