Managing a Huge Number of Tables with Different Schemas


We are developing an application where users can create tables in the app and enter rows of data into each table. Since the app can have thousands of users, and each user can create tens or hundreds of tables, we will end up with millions of different tables. We have two different methods in mind for modeling this data, but both have drawbacks.

Option 1 is to create a single collection and store each row of every user-generated table as a document in that collection. To optimize CRUD performance on these tables, we want to create indexes. However, since tables are defined by users, each table has different fields, so we would need different indexes for different parts of the collection: one index per user-generated table, and the app can have millions of tables. I am not sure whether it is possible, or a good idea, to add that many indexes to a single collection.
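To make Option 1 concrete, here is a minimal sketch of what a row document and a per-table index could look like. All names here (`user_id`, `table_id`, `fields`, the example columns) are assumptions for illustration, not part of an agreed schema:

```python
# Option 1 sketch (hypothetical field names): every row of every
# user-defined table lives as one document in a single shared collection.
# The user-entered values are nested under "fields" so they cannot
# collide with the bookkeeping keys.
row_doc = {
    "user_id": "u42",          # owner of the table
    "table_id": "u42_orders",  # which user-defined table this row belongs to
    "fields": {                # shape differs from table to table
        "customer": "ACME",
        "total": 99.5,
    },
}

# A per-table index would have to cover that table's own columns, e.g.
# one compound index of (table_id, user field) per indexed column --
# which is where the "one index per table" explosion comes from:
index_spec = [("table_id", 1), ("fields.total", 1)]
```

The nesting under `fields` is just one way to keep user-defined column names from clashing with the application's own keys; a flat document would work too, at the cost of reserving some key names.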

Option 2 is to create a separate collection for each user-generated table. In this case we can define indexes separately for each collection. However, according to this article, it is not recommended to have more than 10,000 collections.
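For comparison, Option 2 could be sketched as a per-table naming scheme, where each user table maps to its own collection. The naming convention below is purely hypothetical:

```python
# Option 2 sketch (hypothetical naming scheme): one collection per
# user-defined table, so each collection can carry indexes tailored
# to that table's own schema.
def collection_name(user_id: str, table_id: str) -> str:
    # One namespace entry per user table; with thousands of users each
    # owning tens or hundreds of tables, the collection count quickly
    # passes the ~10,000-collection guideline mentioned above.
    return f"user_{user_id}_table_{table_id}"

name = collection_name("42", "orders")
# With a real driver, per-collection indexes would then be created on
# that collection, e.g. db[name].create_index([("total", 1)]) in PyMongo.
```

The upside is that index definitions stay simple and local to each table; the downside is exactly the namespace growth the question worries about.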

How would you model the database for this application?