Read-oriented MongoDB architecture

I am currently developing an app (serving content to a large number of users) that is predominantly oriented toward reading data from MongoDB. The replica set has one primary node and two secondary nodes (probably more secondaries in production), and no sharding is implemented so far.

App flow in short:

  • The service fetches messages from a broker, each message is processed, and the primary node is updated accordingly. After that, the changes are replicated to the rest of the replica set.
  • The main requirement for the app is that it has to be scalable and read-performant (~3000 reads/sec). To enable that, the initial idea is to have multiple API instances behind a load balancer. Also, one Mongo replica (secondary) will be located very close to every API instance, and I will point each API to read against that replica (probably using tags).
  • The dataset we will be working with is quite small (~2 GB max), so sharding seems like overkill for now.
  • Because of the load balancer, and the fact that user sessions are not sticky, multiple requests within a single user session can hit different API instances and read against different Mongo replicas, so the same query can return different data (because of replication lag).
  • I would like to point out that I want an identical dataset across the entire replica set!
  • Since writes are not so frequent (~50 writes/sec), I was thinking of adjusting the write concern to match the replica set member count (so w: 3 in development), setting wtimeout to ~15 sec, and setting journal: true (a rough sketch follows this list).
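
Roughly, I imagine the write-concern part looking something like the sketch below (pymongo is just an assumption here; the host names, replica set name, and database/collection names are placeholders, not real values):

```python
from pymongo import MongoClient, WriteConcern

# w=3 waits for all three members of the development replica set,
# wtimeoutMS caps that wait at ~15 s, and journal=true requires the write
# to be journaled on disk before it is acknowledged.
uri = (
    "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/"
    "?replicaSet=rs0&w=3&wtimeoutMS=15000&journal=true"
)
client = MongoClient(uri)

# The same write concern can also be applied per collection instead of
# connection-wide:
messages = client["appdb"].get_collection(
    "messages", write_concern=WriteConcern(w=3, wtimeout=15000, j=True)
)
```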

I would really appreciate your opinion on my “strategy”.
Any advice is welcome :slight_smile:

Regards,
Marko

Hi @Marko_Saravanja,

A few comments based on what I see here.

  1. I guess the load balancer would be for your API, served by your back-end, and not between the back-end and the MongoDB nodes. If so then it’s fine. But don’t put a load balancer in front of MongoDB. I don’t think it’s a good idea.
  2. Reading using tags would work, but it’s annoying because it would require a different setup/config for each API node. It would also mean that a particular API node would become unresponsive if its associated MongoDB node goes down for some reason (planned or not). It would be smarter to use the read preference nearest, which always goes to the closest available MongoDB node. That would let you fall back on the second-closest MongoDB node in case something happens.
  3. When you write to MongoDB, you always write to the primary, which then updates the secondaries asynchronously. This means there is no way to guarantee that you will read exactly the same data on one secondary or another at the same exact time T.
  4. To ensure the consistency of the data you are reading, you can use one of the available readConcern levels. I would go with "majority" if you want the best possible level of consistency and isolation… but there is a trade-off, of course (see the sketch after this list).
  5. To be honest, 3000 reads/sec on a 2 GB dataset is not hard to achieve on a single primary. I hope you will have at least 4 GB of RAM on your MongoDB nodes so the entire data set fits in RAM, and that you create the optimal indexes to ensure everything is as efficient as it can be.
  6. Journaling isn’t optional anymore with WiredTiger, and even if it were, you do NOT want to disable it in production if your data is valuable.
  7. Coming back to RS tags vs nearest: you don’t have to decide today. Try the easiest implementation, nearest. It’s still easy to change later.
  8. A great data model, carefully engineered, is the key to great performance. In your case, a good data model will allow fast reads and effective indexes. As you have very few writes, they can be a bit slower, but ideally they would be easy to perform too.
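
To illustrate points 2 and 4, here is a rough sketch of how that could look from the driver side (pymongo is an assumption, and the host names, database, collection, and query are placeholders):

```python
from pymongo import MongoClient, ReadPreference
from pymongo.read_concern import ReadConcern

client = MongoClient(
    "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0"
)

# readPreference "nearest" routes reads to the lowest-latency member and falls
# back automatically if that member goes down; readConcern "majority" only
# returns data acknowledged by a majority of the replica set members.
messages = client["appdb"].get_collection(
    "messages",
    read_preference=ReadPreference.NEAREST,
    read_concern=ReadConcern("majority"),
)

doc = messages.find_one({"userId": 42})  # placeholder query
```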

I hope this helps a bit :sweat_smile:.

Cheers,
Maxime.


Hi @MaBeuLux88.
Thank you for your reply; it has provided some useful guidance.

  1. Yes, it is a load balancer for my APIs, not for the Mongo instances.
  2. 3000 reads/sec is the current figure; management could easily increase it to 10k.

However, I noticed a strange situation. My connection string has the following part:

…readPreference=secondary&readPreferenceTags=node:node1&readPreferenceTags=dc:myDataCenter…

meaning I want to read data from the secondary node tagged as node1, and if this node is unavailable, to read from the first node tagged with dc: myDataCenter.
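
For reference, the same selection expressed as driver options might look roughly like this (pymongo assumed; host names and the database/collection names are placeholders):

```python
from pymongo import MongoClient
from pymongo.read_preferences import Secondary

# Tag sets are tried in order: first any secondary tagged node:node1,
# then any secondary tagged dc:myDataCenter.
tagged_secondary = Secondary(tag_sets=[{"node": "node1"}, {"dc": "myDataCenter"}])

client = MongoClient(
    "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0"
)
messages = client["appdb"].get_collection(
    "messages", read_preference=tagged_secondary
)
```
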
During load tests, queries are indeed run against node1, but, and here comes the strange part, the queries do not use indexes.

Any idea why?

Sorry, that was wrong info. I was monitoring the Indexes tab for my collection in Compass, and noticed that the index usage counters are not increasing during load tests.
The interesting thing is that when reading from secondaries, indexes are used, but the counters remain the same.
Is that default behaviour?

Can you please share your query & the indexes on your collection?

You should check manually what is happening by running a query with .explain(true) to see what the winning plan is actually doing.

Your indexes (and data) are supposed to be exactly the same on all the nodes (modulo the replication lag), so “where” you read shouldn’t change whether you are using an index or not. The winning plan in the explain output should always be the same.
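
A rough sketch of that check (pymongo assumed; the collection name, query shape, and index below are placeholders, since the actual query hasn’t been shared yet):

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=rs0"
)
messages = client["appdb"]["messages"]

# Inspect the winning plan: "IXSCAN" means an index is used,
# "COLLSCAN" means a full collection scan.
plan = messages.find({"userId": 42}).explain()
print(plan["queryPlanner"]["winningPlan"])

# If a COLLSCAN shows up, an index matching the query shape is missing, e.g.:
messages.create_index([("userId", 1)])
```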

This index problem is the #1 problem to solve. A collection scan is a MUCH bigger problem than the 30 ms latency difference between reading from node 1 or node 2. If you are running on MongoDB Atlas, the Performance Advisor should tell you about missing and recommended indexes.

Again, readPreference=nearest does what you want without the tag options.

Cheers,
Maxime.