How should I store a huge, nested dataset for each new event?

I am building a time series database, but I am struggling with the design.

I have ~20 new competitions per day to insert into MongoDB (so ~5000 a year). Each competition has more than 20 markets (only 2 are shown below), and each market contains more than 10000 documents (the time series data for the in-running competition; only 3 documents per market are shown below).

There are a few attributes of the data:

  1. Each competition has a few attributes of its own.
  2. Different markets have different data structures.
  3. Market types are fixed, but not every competition has all markets.

My use case is to look up the time series data of a specific competition's market via competition_name → market_type.
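In code, that lookup is just a two-level access on the nested structure. A minimal Python sketch, assuming competitions are keyed by name and each holds a `market` mapping of market_type → time series (names and values taken from the sample dataset further down):

```python
# Minimal sketch of the competition_name -> market_type lookup,
# assuming an in-memory mapping keyed by competition name.
competitions = {
    "competition1": {
        "competition_name": "competition1",
        "market": {
            "market_type1": [
                {"timestamp": 1, "line_x": 123, "line_y": 234, "line_z": 3462},
            ],
            "market_type2": [
                {"timestamp": 1.4, "para1": 1235678, "para2": 3123},
            ],
        },
    }
}

def get_series(competition_name, market_type):
    """Return the time series for one market of one competition, or []."""
    competition = competitions.get(competition_name, {})
    return competition.get("market", {}).get(market_type, [])

series = get_series("competition1", "market_type1")
```

Whatever storage layout is chosen, it mainly has to make this two-key lookup cheap.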

I am thinking of implementing the following:

One cluster stores all the data; each database stores one competition (probably using competition_name, or some other unique name, as the database name); each collection represents one market of that competition (using market_type as the collection name); and the time series data is stored in that collection (one document per timestamp).
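A rough sketch of that layout, using a plain in-memory dict as a stand-in for the cluster (with the real pymongo driver the insert would be roughly `client[competition_name][market_type].insert_one(doc)`; all names here are hypothetical):

```python
# In-memory stand-in for the proposed layout:
# cluster -> one database per competition -> one collection per market_type
# -> one document per timestamp.
cluster = {}

def insert_point(competition_name, market_type, doc):
    """Store one time series document under its competition and market."""
    database = cluster.setdefault(competition_name, {})  # DB per competition
    collection = database.setdefault(market_type, [])    # collection per market
    collection.append(doc)                               # one doc per timestamp

insert_point("competition1", "market_type1",
             {"timestamp": 1, "line_x": 123, "line_y": 234, "line_z": 3462})
insert_point("competition1", "market_type2",
             {"timestamp": 1.4, "para1": 1235678, "para2": 3123})

# The read path is then exactly competition_name -> market_type:
series = cluster["competition1"]["market_type1"]
```

The sketch makes the trade-off visible: reads are a cheap two-key lookup, but every new competition creates a whole database with ~20 collections.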

My concern is that the number of databases inside the cluster keeps growing; there will be up to 5000 databases after a year. Is there any suggestion on how to handle this?

Here’s my sample dataset for a single competition:

  "competition_name": "competition1",
  "market": {
    "market_type1": [
        "timestamp": 1,
        "line_x": 123,
        "line_y": 234,
        "line_z": 3462
        "timestamp": 1.1,
        "line_x": 1343,
        "line_y": 2134,
        "line_z": 32
        "timestamp": 1.7,
        "line_x": 11122323,
        "line_y": 23124,
        "line_z": 3412362
    "market_type2": [
        "timestamp": 1.4,
        "para1": 1235678,
        "para2": 3123
        "timestamp": 1.8,
        "para1": 1343,
        "para2": 312456
        "timestamp": 1.9,
        "para1": 1123,
        "para2": 312356423
  "competition_startTime": 1242341234,
  "country": "US"