We noticed a significant performance drop after upgrading our servers from version 6 to 7.
The same query, which selects 2 index entries and took 2 ms on version 6, suddenly takes 50 ms on version 7.
The number of scanned/returned documents did not change.
The size of the dataset is as follows:
Documents: TOTAL SIZE 306.4MB, AVG. SIZE 784B
Indexes: TOTAL SIZE 183.0MB, AVG. SIZE 20.3MB
Document structure
documentStructure.json (1.4 KB)
Execution plan version 6
mongoExecutionPlan_6_0_10.json (77.7 KB)
Execution plan version 7
mongoExecutionPlan_7_0_1.json (114.4 KB)
Thanks for providing those details. I assume these tests / explain outputs were run on the same server that was upgraded, but please correct me if I'm wrong here.
I'm going to do some tests on my own version 6.0 and 7.0 environments to see if there's similar behaviour.
It's possible it has something to do with the slot-based query engine (SBE), but that's hard to confirm at this stage.
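If the slot-based engine does turn out to be involved, 7.0 exposes the internalQueryFrameworkControl server parameter, which can force the classic engine for comparison. A hedged mongod.conf fragment for testing only (not a recommended permanent setting):

```yaml
setParameter:
  # forceClassicEngine disables SBE so the query planner behaves as in the classic engine
  internalQueryFrameworkControl: forceClassicEngine
```

Re-running the explain with this in place would help confirm or rule out SBE as the cause.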
I did notice a larger number of document scans within the allPlansExecution section of the version 7 explain output, which seems to account for most of the difference in execution times you are seeing, but the cause of that is still unknown.
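To quantify those extra scans, one can compare the per-plan totalDocsExamined counters from the allPlansExecution section of each explain output. A minimal Python sketch, assuming the attached JSON files follow the standard explain("allPlansExecution") shape (the sample dict below is synthetic, not taken from the attachments):

```python
def docs_examined_per_plan(explain):
    """Return totalDocsExamined for each candidate plan listed in
    executionStats.allPlansExecution of an explain output."""
    plans = explain["executionStats"]["allPlansExecution"]
    return [p["totalDocsExamined"] for p in plans]

# Synthetic example shaped like a real explain("allPlansExecution") document;
# in practice you would json.load() the attached mongoExecutionPlan_*.json files.
sample = {
    "executionStats": {
        "allPlansExecution": [
            {"totalDocsExamined": 2},
            {"totalDocsExamined": 101},
        ]
    }
}
print(docs_examined_per_plan(sample))  # [2, 101]
```

Summing the returned list for each file gives a rough measure of how much extra work plan trialing is doing on version 7.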
I will see if I can spot anything.
I assume these tests / explain outputs were run on the same server that was upgraded but please correct me if I’m wrong here.
Yes, this is correct. These outputs were run on a smaller test instance, but we were getting the same behavior on a bigger cluster.
Downgrading to 6.0 also restores the query run time. (We had kept featureCompatibilityVersion at 6.)
Hi all, I had the same problem. The clearest signal in my metrics was spikes in scanned documents, which directly impacted the application: some data simply did not load, and operations were interrupted by client-side timeouts.
I just downgraded to 6.0.11 and the problem was completely resolved!
I'm posting this in the thread so it can be followed up, and to find out whether anyone else has had the same type of problem.