WARNING! -- MongoDB Serverless Pricing

Hi Parikh_Jain

Please see the “Serverless Pricing” section of this post for more information on how the bill is calculated, along with this article with helpful tips for optimizing workloads on your Serverless instance.

I want to point out that there is a bug if you are using MongoDB Compass that can lead to indexes not being used.

If you create an index in MongoDB Compass, it won’t be used by the Node.js driver (and possibly others, though I can’t speak to that personally).

MongoDB Compass itself will be able to use the index just fine, making you think it’s working when it’s not. People should be aware that this is a possible cause of their bill being high.
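For anyone who wants to confirm whether a query is actually using an index, the explain output tells you directly. Below is a minimal sketch (plain Python, not the driver API) that walks an explain `winningPlan` and flags a collection scan; the helper name and the sample plan fragments are mine, though the field names (`stage`, `inputStage`, `inputStages`) follow the shape of real explain output.

```python
# Sketch: given the winningPlan from db.collection.find(...).explain(),
# report whether the query fell back to a full collection scan (COLLSCAN)
# instead of an index scan (IXSCAN).

def uses_collection_scan(winning_plan: dict) -> bool:
    """Return True if any stage of the winning plan is a COLLSCAN."""
    if winning_plan.get("stage") == "COLLSCAN":
        return True
    # Stages nest under "inputStage" (one child) or "inputStages" (several).
    children = list(winning_plan.get("inputStages", []))
    if "inputStage" in winning_plan:
        children.append(winning_plan["inputStage"])
    return any(uses_collection_scan(child) for child in children)

# Example plan fragments (hypothetical, but shaped like real explain output):
indexed = {"stage": "FETCH", "inputStage": {"stage": "IXSCAN", "indexName": "email_1"}}
unindexed = {"stage": "COLLSCAN", "direction": "forward"}

print(uses_collection_scan(indexed))    # False: the index is actually used
print(uses_collection_scan(unindexed))  # True: every document gets read
```

If the winning plan for a query you expected to be indexed comes back as a COLLSCAN, the index is not being picked up, whatever Compass shows.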

I explain more in the post below:

I had the same issue: $122 in 2 days. I had a few megabytes of data on my instance; I won’t ever use this product again. The app was not live and had 0 users, just me playing around with my API.

Hi Yusuf

I am from the Serverless team and am terribly sorry about the experience you’ve had. Based on your screenshot, it seems like there were a lot of unindexed queries being run. I have sent you a direct message to better understand your use case. I would also suggest checking out the links posted in my responses above. Looking forward to corresponding over direct message.

It’s worth mentioning again, after reading the docs my bill is significantly lower, although this is just a learning app for me, so no real customers/data.

Also got hit for $180 in 7 days. We have 0 users besides 2 developers testing our website. We do have a socket API writing to the database constantly, but the bandwidth for that is minimal. Pretty ridiculous. Definitely feels like a scam.

While I’m not using serverless, I was unable to get a response as to why I’m seeing absurd data transfer values when virtually no users are using my app, just a handful for testing, and only periodically.

The TOTAL size of the data in my database is 711 KB, but somehow in a month I have data transfer values of almost 20 GB and 80 GB? This makes no sense, and no one was willing to answer.

Additionally, in Atlas Metrics it says only 84,483.1 B of data has been transferred for the month. How is this number in any way extrapolated to 20 GB and 80 GB?

I’ve been using Realm with Device Sync, and that is the only accessor of the database. No one was answering, and seeing this thread, it makes more sense now. It seems a bit suspicious, like they are inflating the numbers or something else is going on.

Hi Sesan_Chang

Thank you for taking the time to post this message and for corresponding with me over Direct Messages. After looking over your workload, we found that you were running many unindexed queries, which was resulting in higher-than-expected RPUs. Since you have now moved to a dedicated cluster, you will not incur RPUs. However, it is critical that you use indexes to ensure that the database runs optimally under heavy loads. Please take a look at this guide to help you optimize your performance.
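To make it concrete why unindexed queries dominate the bill, here is a rough back-of-the-envelope sketch. The rate and the simplification that one document examined costs one RPU are assumptions for illustration (the real metering also accounts for document size; check the current pricing page), but the shape of the comparison holds: a full collection scan charges you for every stored document on every query.

```python
# Back-of-the-envelope RPU cost sketch. The price and the one-RPU-per-
# document-examined model are assumptions, not the official billing formula.

PRICE_PER_MILLION_RPU = 0.10  # assumed USD rate; verify on the pricing page

def monthly_cost(docs_examined_per_query: int, queries_per_day: int) -> float:
    """Estimated monthly USD cost for one query pattern (30-day month)."""
    rpus_per_month = docs_examined_per_query * queries_per_day * 30
    return rpus_per_month / 1_000_000 * PRICE_PER_MILLION_RPU

collection_size = 100_000   # documents in the collection
docs_returned = 10          # documents a typical query matches
queries_per_day = 50_000

unindexed = monthly_cost(collection_size, queries_per_day)  # full scan each time
indexed = monthly_cost(docs_returned, queries_per_day)      # index narrows the scan

print(f"unindexed: ${unindexed:,.2f}/month")  # unindexed: $15,000.00/month
print(f"indexed:   ${indexed:,.2f}/month")    # indexed:   $1.50/month
```

Under these assumed numbers, the exact same workload differs by four orders of magnitude depending solely on whether the query can use an index.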

Hi Akansh

Thanks for taking the time to let us know about your issue. When you say “no one was willing to answer”, did you create a separate post? Here’s a good resource on data transfer costs. Data transfer costs depend on how much data you are reading from and writing to the database, and from where (same region, different region, or over the internet). Here’s another resource from AWS that talks more about this.

Also, if you’d like to discuss this further, please create a new post. We’d like discussions on a post to be relevant to the original post. Thank you for your understanding.

Sure, sorry about that. I made a separate post.

Thank you for this thread; I wish I’d seen it before the migration. We’ve been burned by Serverless twice already, with an $800 monthly invoice because of a missing index. I looked for a way to set a monthly spending limit but couldn’t find one; it’s crazy that there isn’t one. It makes me want to migrate back immediately. I can’t recommend that anyone go Serverless on Atlas.

Hi Simon

I am sorry to hear about your issue and about the experience you’ve had. Thank you for providing more details over DM. I’m pasting suggested solutions here for other users that may face similar issues:

  1. You can look into creating alerts (link 1, link 2). Some alerts that may be of interest to you are the ones related to Read Units and Write Units.
  2. You can set up billing alerts to get notified when your bill crosses a specified amount.
  3. You can check out the “Performance Advisor” and the “Profiler” tabs on Atlas for index recommendations and for investigating slow queries, respectively.

Wow, this is just nuts. The pricing page suggests that serverless would be the go-to for small apps with very few reads/writes per day. I guess I’m sticking with the dedicated M10 cluster, switching to a different database provider, or hosting my own instance.

I was going to start using serverless, but it seems that if my team doesn’t index well, the prices will skyrocket.

My question is: is there a configuration to set spending limits for your project and shut it down if they are exceeded? Maybe the alerts would work, but that’s not guaranteed.

Because getting charged $60 per day for testing on a DB instance is just crazy.

Hi Paola_Segura

Can you please confirm where you got the “$60” figure from?

We currently do not have the ability to shut down the database if your bill exceeds a certain amount. However, you can set billing alerts that alert you when the bill exceeds a certain amount.

Thanks for your response. The $60 figure is taken from a case mentioned above by Nathan_Shields.

I’m just evaluating right now.

We recently launched a product with a couple of active users, but we’ve already been hit by a WPU cost of $329.59 (does indexing really have an effect?) plus an RPU cost of $86.42 (with correct indexes), totaling $421.84 in just 1 month. This is not sustainable for a startup. We were drawn in by the promise that Serverless instances let us start small and save costs, but the reverse is the case.

Hi Bilyamin_Adam

Thanks for the question. I see that you’ve also created another post here. I’ve already responded to it and have sent you a direct message (2 days ago) to gain a better understanding of your workload, as I’d like to understand what could be driving the high cost. Please reply to my DM and we can go from there. Thanks.

Do you plan to implement a hard spending limit in the future, or are you prevented from doing so because you use the major cloud providers (AWS, Azure, GCP)?

And how soon would a billing alert be sent in case there was a huge spike in usage and thereby costs?

From reading the comments above, and discovering that there was a bug that prevented indexes from being used even though they were declared, I am not at all comfortable using the serverless pricing option until there is a definite, immediate hard spending limit, as I would rather my users experience a slowdown or disruption than go bankrupt!

Please let me know if you plan on improving this.

Hi Casper

  1. We do plan on implementing such a feature; however, we don’t yet have a timeline for when it will be available.

  2. The billing alert is based on your usage on the prior day, so it is sent out the day after the usage occurs. If you do not wish to wait a day, you can monitor your usage in real time with our monitoring tools, which include a chart showing read and write units per second. Furthermore, you can use MongoDB’s explain command to get an understanding of the RPUs incurred per query before deploying your workload. All of these tools can help ensure that you do not spend more than you plan to; if you do, you can contact our support team or reach out to me.

  3. We have continued to improve our serverless offering over the last few months; I do want to point out that the comments on this thread were posted before a lot of those changes went in. About the bug that you alluded to: you can see the thread here for yourself. The poster went away and did not get back to us with additional information. It seems more like user error, because if this were a bug, we would have seen multiple customers complain about it, either on that post or on other posts on this site.
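The pre-deployment estimate suggested in item 2 works like this: run the query with `.explain("executionStats")` and look at `totalDocsExamined` and `totalKeysExamined` in the result. The sketch below treats each examined document or index key as one unit, which is an assumption for illustration rather than the official RPU formula; the field names match real `executionStats` output, but the sample values are made up.

```python
# Sketch: estimate the work a query does from explain("executionStats").
# One-unit-per-item-examined is an assumed model, not the billing formula.

def estimate_rpus(execution_stats: dict) -> int:
    """Rough work estimate: documents plus index keys examined."""
    return (execution_stats.get("totalDocsExamined", 0)
            + execution_stats.get("totalKeysExamined", 0))

# Hypothetical executionStats fragments for the same query, with and
# without a usable index, over a 100,000-document collection:
indexed_stats = {"totalKeysExamined": 10, "totalDocsExamined": 10, "nReturned": 10}
collscan_stats = {"totalKeysExamined": 0, "totalDocsExamined": 100_000, "nReturned": 10}

print(estimate_rpus(indexed_stats))   # 20
print(estimate_rpus(collscan_stats))  # 100000
```

Whatever the exact billing coefficients, comparing these two numbers before deploying tells you whether a query will scale its cost with the collection size or with its result size.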

If you need additional assistance with your serverless instance, please don’t hesitate to DM me.