How to identify bad targeting queries (Scanned/Returned ratio)

Hi there!
We get Query Targeting alerts from our MongoDB Atlas cluster, and rightly so, as I could spot that one of our queries was lacking an index, causing the entire document set to be scanned.

However, looking at the Query Targeting metric, I still see less-than-ideal ratios, so I would like to identify which queries are problematic. I have enabled diagnostics on some of our dbs, but from other posts on this forum and the docs, the diagnostics/Performance Advisor/profiler only show slow queries, which is not the same thing as poorly targeting queries.

For instance, the query I got the alert for still executed in under 30ms, so it never appeared on the profiler.
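
For reference, the only way I know to check this per query is explain (the orders collection and status field below are just placeholders for illustration), comparing executionStats.totalDocsExamined against executionStats.nReturned:

db.orders.find({ status: "pending" }).explain("executionStats").executionStats

But doing that query by query obviously doesn't scale across the whole cluster.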

What tools are there to identify queries (or at least the collections) that have poor targeting performance?

Hi @Remi_Sormain,

Thank you for your question! You bring up an excellent point. As you mentioned, relying solely on a slowms threshold does not holistically capture inefficient operations, especially queries with poor query targeting performance that may still be completing within the slowms threshold.

We are actively working on a project this quarter that will update how we profile operations for the Atlas Query Profiler/Performance Advisor. In an upcoming Atlas release, the Query Profiler/Performance Advisor will also show operations where the query targeting ratio exceeds 10k.

In the meantime, however, you can manually set your profiling levels to filter on operations that exceed a query targeting value by adding something like the following to your profiler settings. Note, however, that you will need to disable the managed slowms so that your filter is not overwritten. Also, please consider any rate limiting that could occur on the Query Profiler due to too many logged operations if you set the query targeting filter too low; the Query Profiler currently has a limit of ingesting 86400 slow query log lines per 24 hours per host.

{$gte: [{$divide: ["$docsExamined", "$nreturned"]}, 10000]}

Thanks!
Frank


Thanks a lot @Frank_Sun for your quick reply!

I’m keen to see this profiler update on Atlas. Thanks for the guidance on the profiler filter, this is a real gem! I struggled a bit to configure the filter via mongosh, is this right?

db.setProfilingLevel( 1, { filter: { $expr: { $gte: [{ $divide: ['$docsExamined', '$nreturned'] }, 500] } } } )

For now I went with the following to be sure I didn’t filter out poorly targeting requests, given that I know none of my queries should legitimately scan that many documents:

db.setProfilingLevel(1, { filter: { docsExamined: { $gte: 500 } } });

Hi @Remi_Sormain,

Yes! Your setProfilingLevel command looks great. One thing to note, though: if you run that command, it will overwrite the slowms filter, so you will not see operations over the slowms threshold. You can also verify your filters are set correctly by running db.getProfilingStatus().
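
If you would like to keep the usual slow-query behaviour as well, one option (just a sketch, with 100 ms standing in for whatever slowms value you use and 500 being the ratio from your filter) is to combine both conditions with $or, and then confirm the settings took effect:

db.setProfilingLevel(1, { filter: { $or: [ { millis: { $gte: 100 } }, { $expr: { $gte: [{ $divide: ["$docsExamined", "$nreturned"] }, 500] } } ] } })

db.getProfilingStatus()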

Thanks!
Frank

