How fast is it?

I’m going straight to the point.

In a scenario with 300-500 tables/collections that need to be linked, my question is: how fast is MongoDB?

In MySQL, 300-500 tables isn't much of a problem; it fulfills that role correctly.

If you can't imagine an app with 300-500 tables, I can't say much more.
At most I can tell you that it's a game.

Embedding everything in a single document smells to me like bad practice from the 2000s, and I'm not even going to get into how "big" each table or collection would be. Besides, from what I understand, embedding gets slower when there is a lot of information.

Obviously there are ways to optimize the query. That’s clear to me.

My question is about $lookup: how fast could MongoDB go?

How much data per table/collection? For this, let's set up a hypothetical scenario.


Collection A
{ _id: AAA, collection_B_1: id, collection_B_n+1: id, collection_C_1: id, collection_C_n+1: id, … }

An average of 300-500 tables/collections with 20 fields each (objects, strings, integers…).
With an average document count of, let's say, over 5 million per collection (a very conservative figure).
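To make the question concrete, here is a minimal sketch of the kind of $lookup I mean. The collection and field names (`collection_A`, `collection_B_1`) are hypothetical, taken from the schema sketched above; with pymongo this pipeline would be passed to `db.collection_A.aggregate(pipeline)`.

```python
# Sketch of a $lookup join from Collection A to one referenced collection.
# All names here are hypothetical, matching the schema sketched above.
pipeline = [
    # Join each document in A with the matching document in collection_B_1
    {"$lookup": {
        "from": "collection_B_1",
        "localField": "collection_B_1",  # reference stored in A
        "foreignField": "_id",
        "as": "b1",
    }},
    # $lookup always produces an array; unwind it to get one joined doc
    {"$unwind": "$b1"},
    # Keep the result set small (e.g. 300 records)
    {"$limit": 300},
]

# With a live server: results = list(db.collection_A.aggregate(pipeline))
```

Note that each additional linked collection means another $lookup stage, which is exactly why I'm asking how this scales to hundreds of collections.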

Although the scenario is feasible, I would like a time estimate for:

  • Each player sends and collects data every 2 seconds.
  • Out-of-band requests (like the previous one, but exhaustive), for example geolocation.
  • An average of 25 to 50 queries within 5 to 10 seconds.
  • Number of players: 5 million.
  • Number of documents per collection: 5 to 25 million.
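For scale, the figures above already imply a sustained load that can be estimated with simple arithmetic. This is only a back-of-envelope sketch using the numbers from the list, pessimistically assuming all players are active at once:

```python
# Back-of-envelope load estimate from the figures above.
players = 5_000_000
interval_s = 2  # each player sends and collects data every 2 seconds
base_ops_per_s = players / interval_s  # sustained baseline

# Burst case: 25-50 queries per player within 5-10 seconds,
# assuming (pessimistically) every player bursts at the same time.
low_qps = players * 25 / 10
high_qps = players * 50 / 5

print(f"baseline: {base_ops_per_s:,.0f} ops/s")
print(f"burst: {low_qps:,.0f} - {high_qps:,.0f} queries/s")
```

That is millions of operations per second before any $lookup cost is counted, which is the context for the time estimate I'm asking for.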

I have read that MongoDB is faster than MySQL. Well, let's put that to the test.

Another user commented on this problem, but only partway.

In my case I’m going straight to the point.

I'm not asking for a CPU-percentage or traffic benchmark, since that would depend on the physical machine, the internet provider, port configuration…
Just a rough time estimate for that scenario.

Are there any official metrics from MongoDB itself? There aren't even approximate ones.
The forums are tinted with opinion on this and aren't reliable. Not even Epic's. I'd rather see official metrics from MongoDB for a scenario similar to the one I proposed.
Just to get some idea of the speed.
Note: even if those metrics aren't public, they could hide whose metrics they are.

For now, here is a simple example of what I'm seeing:

  • Mongo Data API: 300 records = 200 ms
  • Laravel API with the driver: 300 records = 600-700 ms
    And that's with Laravel running locally…

I'll stick with the Mongo Data API even though it's more restrictive…
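For anyone who wants to reproduce this kind of comparison, here is a minimal timing sketch. It only shows the measurement pattern; the commented-out request and its URL are placeholders, not a real endpoint:

```python
import time

def time_call(fn, repeats=5):
    """Run fn several times and return the best wall-clock latency in ms.

    Taking the minimum of a few runs filters out one-off spikes
    (cold caches, GC pauses) and gives a fairer comparison.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, (time.perf_counter() - start) * 1000)
    return best

# With a real client it would look something like (hypothetical URL):
#   time_call(lambda: requests.post(DATA_API_URL, json=query))
```

The same wrapper can time both the Data API call and the Laravel endpoint, so the 200 ms vs 600-700 ms numbers above can be checked under identical conditions.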

Is there a way to write the functions and endpoints from an IDE, for example Studio 3T? It's very annoying to have to go through the web page… and it doesn't seem like I can apply good practices there either…