Tier increase via Autoscaling after implementing FullDocumentBeforeChange


Recently, as part of a project by our team to implement MongoDB change streams on one of our collections, I’ve noticed that our project has been autoscaling from M20 → M30 more and more frequently. When implementing the change streams, we enabled `fullDocumentBeforeChange`, so each update event now contains both the `fullDocument` and the `fullDocumentBeforeChange`.
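For context, this is roughly what our setup looks like, as a minimal mongosh sketch (the collection name `orders` is a stand-in for our real collection):

```javascript
// Enabling pre-images is a two-step opt-in in MongoDB 6.0+:
// first on the collection, then on the change stream cursor itself.

// 1. Record pre-images on the collection; from this point on, every
//    update/replace/delete also writes the old version of the document
//    to config.system.preimages.
db.runCommand({
  collMod: "orders",
  changeStreamPreAndPostImages: { enabled: true }
})

// 2. Ask the change stream to attach the images to each event.
const cursor = db.orders.watch([], {
  fullDocument: "updateLookup",              // post-image on updates
  fullDocumentBeforeChange: "whenAvailable"  // pre-image, if recorded
})
```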

Is it possible that including this additional feature is leading to the increase in autoscaling between M20 → M30? We used to sit more regularly at M10 and occasionally the autoscaling would increase to M20.

Are there other things I should investigate? If so, any advice would be appreciated.

We are on Atlas, MongoDB version 6.0.6.

Hi @Paul_Chynoweth,

I suspect that you might currently be operating close to the limit of what M20 provides, so any additional resource usage pushes you up to the M30 tier. To learn more, please refer to the Auto-Scaling documentation.

  • May I ask if you notice any patterns when the autoscale happens, such as at a certain time of day or under specific workloads (e.g., analytics)?

  • As you mentioned that you experienced the same behaviour with M10 → M20, were you observing similar usage patterns when that happened as well?

However, in the current implementation as of MongoDB 6.0.8, the `fullDocumentBeforeChange` feature allows you to store pre-images, which are written to the `config.system.preimages` collection. This consumes additional resources and may contribute to autoscaling in some cases, depending on your workload, especially if you are already near the edge of your cluster's capacity and an influx of additional workload triggers the autoscale.
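To get a feel for how much extra load pre-images can add, a back-of-envelope estimate like the one below may help. All numbers here are hypothetical, and the helper function is illustrative, not a MongoDB API; substitute your own update rate and document sizes from the Atlas metrics page:

```python
# Rough extra write volume and storage that pre-images add: every update
# now also writes the old version of the document to config.system.preimages.

def preimage_overhead(updates_per_sec: float,
                      avg_doc_bytes: int,
                      retention_secs: int) -> dict:
    """Estimate extra load from pre-images for an update-heavy workload."""
    extra_write_bytes_per_sec = updates_per_sec * avg_doc_bytes
    steady_state_storage = extra_write_bytes_per_sec * retention_secs
    return {
        "extra_write_MBps": extra_write_bytes_per_sec / 1e6,
        "steady_state_storage_GB": steady_state_storage / 1e9,
    }

# e.g. 200 updates/s of ~8 KB documents, pre-images retained for 24 hours
print(preimage_overhead(200, 8_000, 24 * 3600))
```

On a workload like this hypothetical one, the pre-images alone add on the order of 1.6 MB/s of writes and well over 100 GB of steady-state storage, which is easily enough to nudge a cluster sitting near its tier limit into autoscaling.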

Also, as per the Change Streams documentation, you can limit the size of the `config.system.preimages` collection by setting an `expireAfterSeconds` time for the pre-images, which prevents it from growing too large.
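As a sketch, the retention window can be set cluster-wide from mongosh (the value of 3600 seconds is a hypothetical example; pick a window that comfortably covers how far behind your change stream consumers can fall):

```javascript
// Cap pre-image retention so config.system.preimages cannot grow
// without bound; expired pre-images are removed by a background process.
db.runCommand({
  setClusterParameter: {
    changeStreamOptions: {
      preAndPostImages: { expireAfterSeconds: 3600 }
    }
  }
})
```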

Hope the above helps! In case you need further assistance, I suggest reaching out to MongoDB Atlas support as they have better insights into your cluster.
