How to optimize a billion-document time series collection

I have a time series collection that contains nearly one billion documents.

I simply run:

db.collection.find({'metaData.eventId': "FB-2348", 'metaData.marketName': "Match Odds", 'metaData.selectionId': 0})

The query takes more than 2 minutes to finish.

                              "$eq":"Match Odds"

The billion documents look like this:

ts: 2024-04-05T07:50:53.111+00:00
metaData: {
    betType: "for,ahover,18",
    bookie: "pin88",
    ...
}

Is there anything wrong with my query? Or is there anything I can optimize in my collection, schema design, or data configuration? Thank you.


You are querying a time series collection without any time criterion, and that is the main problem. It comes down to how MongoDB stores this type of collection: documents are grouped into buckets by time, so a filter on metaData fields alone forces the server to unpack and scan every bucket.
In general, a query against a time series collection should combine a time-range predicate with the other criteria.
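
You can verify this by inspecting the query plan. A minimal sketch in mongosh (using the collection and field names from your query; requires a running deployment):

```javascript
// Inspect the winning plan and execution stats. On a time series
// collection with no time predicate and no secondary index, this
// typically reports a COLLSCAN over the internal bucket collection,
// meaning every bucket is unpacked and filtered.
db.collection.find({
  "metaData.eventId": "FB-2348",
  "metaData.marketName": "Match Odds",
  "metaData.selectionId": 0
}).explain("executionStats")
```

The `totalDocsExamined` figure in the output will tell you how much of the collection the query actually touched.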
If your queries are not tied to a time period at all, a time series collection (or a time series database in general) is probably the wrong tool.
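
If your queries do have a natural time window, two things usually help. This is a sketch, assuming `metaData` is the collection's metaField and using the field names from your query (the index and the date range shown are illustrative):

```javascript
// MongoDB 5.0+ supports secondary indexes on metaField subfields;
// this lets the server skip buckets whose metadata does not match.
db.collection.createIndex({
  "metaData.eventId": 1,
  "metaData.marketName": 1,
  "metaData.selectionId": 1
})

// Combine the metadata filter with a bound on the time field, so
// only buckets that overlap the window are unpacked.
db.collection.find({
  "metaData.eventId": "FB-2348",
  "metaData.marketName": "Match Odds",
  "metaData.selectionId": 0,
  ts: {
    $gte: ISODate("2024-04-05T00:00:00Z"),
    $lt:  ISODate("2024-04-06T00:00:00Z")
  }
})
```

With both in place, re-running `.explain("executionStats")` should show far fewer documents examined than the full billion.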

Best regards,