Slow response times on Mongo Realm GraphQL


My team and I are experiencing slow response times on our Realm app’s GraphQL API (900-1500ms). A few months ago we were seeing response times of around 300ms, so it’s a significant increase.

I am aware that it is impossible for you to tell what’s causing this increase without knowing our code base. However, we have some general questions about what the issue might be.
The weird part for us is that the response times are about the same for every kind of request. Our heavier custom resolvers take the same time as the simplest default queries that are directly connected to the Atlas collections. If a certain query had taken more time, it would have been easy for us to pin down a memory leak or something similar in that query’s function, but this does not seem to be the case…

We have upgraded our Atlas Cluster to M20, and have not released the app yet, so it shouldn’t be related to the traffic load.

Our app has grown fairly large, we are using:

  • 46 functions
  • 1 scheduled trigger (every 10min)
  • 15 custom resolvers
  • API key + user/password auth
  • Rules for every collection (44)
  • 100+ unit and integration tests, implemented similarly to your guide (these tests are not included in the source code of the Realm app though, so we don’t suspect them to be the issue)

Everything is developed through GitHub; nothing is done in the console.

We tried disabling user/password auth and saw a potential decrease in response time (perhaps 100-200ms quicker on average).
We have also tried removing all functions, custom resolvers, rules and triggers, and we still experienced around 1100ms response times for queries that are directly connected to the Atlas collections (even on small collections of about 5 documents).
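For reference, the kind of request we’re timing is just the POST payload of one of the auto-generated queries; the collection and field names below are made up for illustration, not our real schema:

```json
{
  "query": "query { items(limit: 5) { _id name } }"
}
```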

As you can probably tell, we are getting a bit desperate. Do you have any ideas about what might be causing these high response times? Could it be some kind of setting/config that can slow down a whole app? Is our app too large? Any help would be appreciated!

I believe this is not just about you; we had the same experience a few days ago, and it has happened more than once.

Sorry to hear that you’re experiencing this too, @Royal_Advice. What do you mean by “happened more than once”? After having implemented certain features? In that case, which?

If you mean that some requests spike in response time, we’ve experienced that too. That’s not what I meant in this post however, as we have gotten a consistent increase in average response time.

I would, to an extent, feel relieved if this were something that could be magically solved when Mongo patches Realm, but I don’t think that’s the case. As stated above, the response time drops back to the previous 300ms when we revert to older commits, so it seems to be something in our code base. We’ve tried to isolate the specific commit, but it doesn’t appear to be connected to a single commit…


We removed all rules (except for one for testing), and the response times dropped from 900-1500ms to 500-700ms. Can anyone explain why this is?

We still have to bring it down further though. Is anyone aware of caveats similar to the rules that might increase the response times by a couple of hundred ms?

(I am unable to edit the original post, but I accidentally wrote that we tried to remove the rules last Friday; that was not the case. We had only removed all functions, custom resolvers and triggers at that point.)

Hi Petas - I’m not quite sure what rules/permissions you had on your application but in general, adding permissions can make a request slower since we evaluate permissions on a per-request basis. That being said, there are a couple of things that might be worth looking into:

  • Can you use filters? These tend to reduce compute/request time.
  • Can you run certain custom resolvers as system functions? These will bypass rules and be faster.
  • What cloud provider/region are you using for your Atlas cluster and what is your Realm Deployment model? Using a Realm region that is closest to your Atlas Cluster Region and where the requests are being made is a good practice here.
  • Adding indexes to your collections
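To illustrate the second point: in an exported app, a function can be set to run as system in its configuration. A sketch, where everything except the `run_as_system` flag is a placeholder:

```json
{
  "name": "myResolverFunction",
  "private": false,
  "run_as_system": true
}
```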

Hi @Sumedha_Mehta1, thank you for your response!

I see what you’re saying about the rules/permissions, and it makes sense that evaluating them could increase the response time. However, you are required to have rules for each collection you want to use in Realm, so removing them is not an option for us. As for the roles in the rules, we haven’t done anything funky at all, so they should run smoothly. The role definitions look the same for all collections we’re using:

"roles": [
            "name": "default",
            "apply_when": {},
            "read": true,
            "write": false,
            "insert": false,
            "delete": false,
            "search": true,
            "additional_fields": {}

No, we need every custom resolver to run as a regular resolver; none of them can be ‘converted’ to a system function.

We are based in Stockholm, and have both our Atlas cluster and Realm region in eu-west - Ireland.
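For completeness, this is roughly what the deployment section of our exported app config looks like (a sketch; the app name is a placeholder, and exact field names may vary between CLI versions):

```json
{
  "name": "our-app",
  "deployment_model": "LOCAL",
  "location": "IE"
}
```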

We have indexed our collections appropriately. However, we don’t see how indexes or filters could fix the issue, given that the response times are about the same for simple queries on collections with 3 documents as for more complex queries on collections with thousands of documents… That makes it feel like something closer to the geographical issue you’re mentioning. However, as stated in the post, the app was quick in the past, with the Atlas cluster and Realm app deployed in the same region as now.

Does the list of features and their quantities above seem like it could be too large for Realm to handle? We don’t think it’s that large or complex though…

We greatly appreciate your help, Sumedha!


Hey @petas - could you open a support ticket with MongoDB? This is a bit more nuanced and requires looking into your application more.

Just an update for the people who might stumble across this thread in the future.

I reported the ticket to the Mongo support team at the end of January, as suggested above. They confirmed that this seems to occur on certain apps, and that they suspect it’s a bug on their end. Since then I’ve asked for updates every other week, with the constant answer that they’re “working on it”. A few weeks ago the ticket was simply closed, without any reported resolution.

We’re very disappointed in the service overall. We were closing in on launch when I first opened this thread, so it was very stressful that this issue appeared right then. Since then we’ve been forced to keep stalling the release, until a couple of months ago when it simply couldn’t wait any longer. We had to tweak our frontend a whole lot, and worked hard on concealing the response times as much as possible.

If someone is experiencing this kind of problem early on in their development process, we would advise you to just switch to another serverless solution. Over three months have passed and nothing has improved.

Hi @petas - I’m sorry to hear that you didn’t have a great experience with our support/service. For complete transparency, there was a gap in our service implementation around how the GraphQL schema was being generated for very large JSON schemas. This issue was identified earlier this year and is being addressed, though unfortunately it is not a quick bug fix. I’m expecting that this will get resolved around the end of May.

I understand that is not ideal based on your launch, but please let me know if you have any further feedback to pass on. I will also update this thread when we release the fix. You can email me at



Thank you for your thorough response, Sumedha! If support had offered such transparency (and a potential ETA) we would have been able to handle the situation in a different, less stressful way.

However, I think your responses on this forum have always been on point, so kudos to you! I have forwarded this information to our stakeholders, and we’re looking forward to an update on the subject by the end of May!



The Mongo team has just fixed the issue! Our response times are finally back at 150-300ms again. What a relief!

Thank you @Sumedha_Mehta1 for the continuous communication on the process, I’ve appreciated it a lot!


This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.