Slow Query with IXSCAN

Hi,
I’m using the latest MongoDB Docker container for development and testing.
While developing I found these log entries (log level 5, profiling level 1):

{"t":{"$date":"2023-03-15T08:44:11.227+00:00"},"s":"D2", "c":"COMMAND",  "id":21965,   "ctx":"conn68","msg":"About to run the command","attr":{"db":"inPoint_WF","client":"172.17.0.1:56210","commandArgs":{"find":"wfc.scheduled_commands","filter":{"ExecuteTime":{"$lt":638144666512274247}},"$db":"inPoint_WF","lsid":{"id":{"$uuid":"112dfa69-185a-45b5-9c76-923ec402ad7e"}}}}}
{"t":{"$date":"2023-03-15T08:44:11.227+00:00"},"s":"D3", "c":"STORAGE",  "id":22414,   "ctx":"conn68","msg":"WT begin_transaction","attr":{"snapshotId":131960,"readSource":"kNoTimestamp"}}
{"t":{"$date":"2023-03-15T08:44:11.227+00:00"},"s":"D5", "c":"QUERY",    "id":20967,   "ctx":"conn68","msg":"Beginning planning","attr":{"options":"INDEX_INTERSECTION ","query":"ns=inPoint_WF.wfc.scheduled_commandsTree: ExecuteTime $lt 638144666512274247\nSort: {}\nProj: {}\n"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20968,   "ctx":"conn68","msg":"Index number and details","attr":{"indexNumber":0,"index":"kp: { _id: 1 } unique name: '(_id_, )' io: { v: 2, key: { _id: 1 }, name: \"_id_\" }"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20968,   "ctx":"conn68","msg":"Index number and details","attr":{"indexNumber":1,"index":"kp: { ExecuteTime: -1 } name: '(idx_exectime, )' io: { v: 2, key: { ExecuteTime: -1 }, name: \"idx_exectime\", background: true }"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20968,   "ctx":"conn68","msg":"Index number and details","attr":{"indexNumber":2,"index":"kp: { CommandName: 1, Data: 1 } unique name: '(idx_key, )' io: { v: 2, unique: true, key: { CommandName: 1, Data: 1 }, name: \"idx_key\", background: true }"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20970,   "ctx":"conn68","msg":"Predicate over field","attr":{"field":"ExecuteTime"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D2", "c":"QUERY",    "id":20971,   "ctx":"conn68","msg":"Relevant index","attr":{"indexNumber":0,"index":"kp: { ExecuteTime: -1 } name: '(idx_exectime, )' io: { v: 2, key: { ExecuteTime: -1 }, name: \"idx_exectime\", background: true }"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20972,   "ctx":"conn68","msg":"Rated tree","attr":{"tree":"ExecuteTime $lt 638144666512274247  || First: 0 notFirst: full path: ExecuteTime\n"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20944,   "ctx":"conn68","msg":"Tagging memoID","attr":{"id":1}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20943,   "ctx":"conn68","msg":"Enumerator: memo just before moving","attr":{"memo":"[Node #1]: AND enumstate counter 0\n\tchoice 0:\n\t\tsubnodes: \n\t\tidx[0]\n\t\t\tpos 0 pred ExecuteTime $lt 638144666512274247\n\n"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20976,   "ctx":"conn68","msg":"About to build solntree from tagged tree","attr":{"tree":"ExecuteTime $lt 638144666512274247  || Selected Index #0 pos 0 combine 1\n"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20978,   "ctx":"conn68","msg":"Planner: adding solution","attr":{"solution":"FETCH\n---nodeId = 2\n---fetched = 1\n---sortedByDiskLoc = 0\n---providedSorts = {baseSortPattern: { ExecuteTime: -1 }, ignoredFields: []}\n---Child:\n------IXSCAN\n---------indexName = idx_exectime\n---------keyPattern = { ExecuteTime: -1 }\n---------direction = 1\n---------bounds = field #0['ExecuteTime']: (638144666512274247, -inf.0]\n---------nodeId = 1\n---------fetched = 0\n---------sortedByDiskLoc = 0\n---------providedSorts = {baseSortPattern: { ExecuteTime: -1 }, ignoredFields: []}\n"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D5", "c":"QUERY",    "id":20979,   "ctx":"conn68","msg":"Planner: outputted indexed solutions","attr":{"numSolutions":1}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D2", "c":"QUERY",    "id":20926,   "ctx":"conn68","msg":"Only one plan is available","attr":{"query":"ns: inPoint_WF.wfc.scheduled_commands query: { ExecuteTime: { $lt: 638144666512274247 } } sort: {} projection: {}","planSummary":"IXSCAN { ExecuteTime: -1 }"}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"D3", "c":"STORAGE",  "id":22413,   "ctx":"conn68","msg":"WT rollback_transaction","attr":{"snapshotId":131960}}
{"t":{"$date":"2023-03-15T08:44:11.228+00:00"},"s":"I",  "c":"COMMAND",  "id":51803,   "ctx":"conn68","msg":"Slow query","attr":{"type":"command","ns":"inPoint_WF.wfc.scheduled_commands","command":{"find":"wfc.scheduled_commands","filter":{"ExecuteTime":{"$lt":638144666512274247}},"$db":"inPoint_WF","lsid":{"id":{"$uuid":"112dfa69-185a-45b5-9c76-923ec402ad7e"}}},"planSummary":"IXSCAN { ExecuteTime: -1 }","keysExamined":0,"docsExamined":0,"cursorExhausted":true,"numYields":0,"nreturned":0,"queryHash":"8E53FB34","planCacheKey":"68A88A17","queryFramework":"classic","reslen":122,"locks":{"FeatureCompatibilityVersion":{"acquireCount":{"r":1}},"Global":{"acquireCount":{"r":1}},"Mutex":{"acquireCount":{"r":1}}},"storage":{},"remote":"172.17.0.1:56210","protocol":"op_msg","durationMillis":0}}

It states that there was a “slow query” for a find on the collection scheduled_commands with the filter {"ExecuteTime":{"$lt":638144666512274247}}.

There is a descending index on “ExecuteTime”, and the plan summary even shows an IXSCAN using that index. Also, the reported duration (durationMillis) is 0 ms.
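
For reference, here is roughly how I verify the plan and timing from mongosh (same collection and filter as in the log; the exact explain output fields can vary by version):

db.getCollection("wfc.scheduled_commands")
  .find({ ExecuteTime: { $lt: Long("638144666512274247") } })  // int64 value from the original query
  .explain("executionStats")
// winningPlan shows the IXSCAN on { ExecuteTime: -1 }; executionTimeMillis is ~0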

Can someone point me in the right direction as to why this is classified as a “slow query”?

EDIT: I see the same with a find that only filters on _id:

{"t":{"$date":"2023-03-15T09:36:51.439+00:00"},"s":"I",  "c":"COMMAND",  "id":51803,   "ctx":"conn133","msg":"Slow query","attr":{"type":"command","ns":"inPoint_WF.wfc.workflows","command":{"find":"wfc.workflows","filter":{"_id":"64108391e872af626b278864"},"$db":"inPoint_WF","lsid":{"id":{"$uuid":"5f3a7106-86ff-4d68-a4bb-6caa3c015786"}}},"planSummary":"IDHACK","keysExamined":1,"docsExamined":1,"cursorExhausted":true,"numYields":0,"nreturned":1,"queryHash":"740C02B0","queryFramework":"classic","reslen":67766,"locks":{"FeatureCompatibilityVersion":{"acquireCount":{"r":1}},"Global":{"acquireCount":{"r":1}},"Mutex":{"acquireCount":{"r":1}}},"storage":{},"remote":"172.17.0.1:37932","protocol":"op_msg","durationMillis":3}}

Thanks in advance

Hello @Markus_Schweitzer,

Welcome to the MongoDB Community Forums :sparkles:

As per the documentation, client operations (such as queries) appear in the log if their duration exceeds the slow operation threshold or if the log verbosity level is 1 or higher, which matches your case.

This is happening because of the high log verbosity level, so I believe this is working as designed.
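
As a quick sketch (assuming you are connected with mongosh and have the required privileges), you can check both settings like this:

// current profiler level and slow-operation threshold (slowms, 100 ms by default)
db.getProfilingStatus()

// current log verbosity per component; any value of 1 or higher logs every operation
db.getLogComponents()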

I hope it helps!

Best,
Kushagra

I believe the OP is not asking why there’s a slow query log message; instead, he’s asking why that specific command/query is classified as “slow” (e.g. it uses an index scan and the duration shows 0 ms).
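
In that case, the answer is that at verbosity 1 or higher the server logs every completed operation under the same “Slow query” message (id 51803) regardless of duration, so the label by itself does not mean the slowms threshold was exceeded. A minimal sketch to go back to logging only genuinely slow operations (the values are just examples):

// reset the default log verbosity
db.setLogLevel(0)

// keep profiling at level 1 and set the slow-op threshold explicitly (100 ms here)
db.setProfilingLevel(1, { slowms: 100 })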