Spike in read latencies at high throughput (15k QPS)

We run a MongoDB 4.2 replica set with one primary and two secondaries, and the application reads from the secondaries.

When I reach a high read QPS (15k read QPS across two secondaries, or 9k on a single secondary) with a small number of writes (~500 QPS), the read latencies shoot up.
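To put numbers on "latencies shoot up", it helps to record per-query latency on the client side and compare p50/p95/p99 below and above the QPS threshold. A minimal sketch, assuming you time each query in the application (the workload here is a stand-in; replace the lambda with your actual driver call):

```python
import time
from statistics import quantiles

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, (time.perf_counter() - start) * 1000.0

def latency_summary(samples_ms):
    """Return p50/p95/p99 from a list of latencies in milliseconds."""
    # quantiles(..., n=100) yields the 1st..99th percentile cut points.
    pct = quantiles(samples_ms, n=100)
    return {"p50": pct[49], "p95": pct[94], "p99": pct[98]}

if __name__ == "__main__":
    # Stand-in workload: replace with your find() call against a secondary.
    samples = [timed(lambda: sum(range(1000)))[1] for _ in range(500)]
    print(latency_summary(samples))
```

Comparing the tail (p99) rather than the average makes the spike much easier to correlate with the system-level graphs.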

Here are some other details:

  1. CPU utilisation: 75% idle
  2. Load average: 0.4 per CPU
  3. A spike in CPU usage, interrupts, free memory, and created processes when the failure occurs
  4. No major drop in concurrency ticket availability
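Point 4 can be checked directly from `db.serverStatus()`: in MongoDB 4.2 the WiredTiger ticket counters live under `wiredTiger.concurrentTransactions`. A hedged sketch that reports ticket headroom from a serverStatus-shaped document (the sample numbers below are made up; feed it the real `db.serverStatus()` output from the shell or your driver):

```python
def ticket_headroom(server_status):
    """Report in-use vs. available WiredTiger tickets from a serverStatus doc."""
    tickets = server_status["wiredTiger"]["concurrentTransactions"]
    report = {}
    for kind in ("read", "write"):
        t = tickets[kind]
        # 'out' = tickets in use, 'available' = remaining, 'totalTickets' = pool size
        report[kind] = {
            "in_use": t["out"],
            "available": t["available"],
            "pct_free": 100.0 * t["available"] / t["totalTickets"],
        }
    return report

# Made-up sample mimicking the 4.2 serverStatus shape.
sample = {
    "wiredTiger": {
        "concurrentTransactions": {
            "read": {"out": 12, "available": 116, "totalTickets": 128},
            "write": {"out": 3, "available": 125, "totalTickets": 128},
        }
    }
}
print(ticket_headroom(sample))
```

If `available` stays well above zero during the spike, the bottleneck is unlikely to be WiredTiger ticket exhaustion.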

I have uploaded the rest of the related metrics here.

The reads are not random, so I don’t think this is a memory-pressure issue (the read IOPS are also low, which supports this argument).

Primarily it seems like a MongoDB application-level limitation, but I am not sure how to conclude that.

Can someone give pointers on how to debug this next?

  1. When does MongoDB create new processes?
  2. What explains the high number of interrupts and context switches?
  3. Why would free memory spike when the box is clearly under pressure?
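For questions 1 and 2, the kernel-wide counters behind those graphs are exposed in `/proc/stat` on Linux: `ctxt` is the cumulative context-switch count and `processes` counts forks, which includes new threads. Since mongod in 4.2 uses a thread per connection by default, a burst of new connections shows up as "created processes" in host metrics. A small sketch that samples the per-second deltas (Linux-only; field names are from proc(5)):

```python
import time

def read_proc_stat():
    """Return (context_switches, forks) counters from /proc/stat (Linux only)."""
    ctxt = forks = 0
    with open("/proc/stat") as f:
        for line in f:
            fields = line.split()
            if fields[0] == "ctxt":
                ctxt = int(fields[1])       # cumulative context switches
            elif fields[0] == "processes":
                forks = int(fields[1])      # cumulative forks (threads included)
    return ctxt, forks

if __name__ == "__main__":
    c0, f0 = read_proc_stat()
    time.sleep(1)
    c1, f1 = read_proc_stat()
    print(f"context switches/s: {c1 - c0}, forks/s: {f1 - f0}")
```

Correlating these deltas with connection counts from `db.serverStatus().connections` would show whether the interrupt/context-switch spike is driven by connection churn.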

This is definitely not an intermittent issue, since I was able to reproduce it consistently.

Hi @maneesh

Welcome to our forums! Reading your post, I think these questions might be better asked in our [Ops and Admin category](https://www.mongodb.com/community/forums/c/ops-admin), as they don’t appear to be directly related to the M201 MongoDB Performance course.

If I’m mistaken, can you clarify which chapter and lesson in M201 you are having difficulties with?

Hope this helps,


@Eoin_Brazil Apologies, I have updated the tag. Thanks for correcting me.