Data size is larger than expected

The overview of my cluster shows a size of around 100 MB:


But when I tap into the detailed graph, the size is over 500 MB:

100 MB is closer to what I expect given the data I’m storing:

What is the source of the extra storage?

Hi @Harry_Netzer1,

Based on the timing of where the logical size starts to increase from 0.0 B in your first screenshot, I suspect the screenshots were taken quite recently after some type of bulk insert.

I believe this is due to the granularity of the metrics shown in your first screenshot (the last 30 days at a granularity of 1 hour).

100 MB is closer to what I expect given the data I’m storing:

In your third screenshot I am seeing STORAGE SIZE (compressed), which I believe differs from LOGICAL DATA SIZE. Please see the screenshot below, which highlights both from my test environment:

You can find some more details regarding this here.
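If you want to double-check those two numbers outside the Atlas UI, a minimal sketch along these lines should do it (assuming Python with pymongo; the connection string and database name below are placeholders you would replace with your own):

```python
# Minimal sketch (assumptions: Python + pymongo, placeholder URI / db name).
# dbStats reports the logical (uncompressed) data size alongside the
# compressed on-disk storage size, which is where the two UI numbers differ.
from pymongo import MongoClient

client = MongoClient("ATLAS_URI")          # placeholder connection string
stats = client["mydb"].command("dbStats")  # placeholder database name

mb = 1024 * 1024
print(f"logical data size: {stats['dataSize'] / mb:.1f} MB")    # uncompressed
print(f"storage size:      {stats['storageSize'] / mb:.1f} MB")  # compressed on disk
print(f"index size:        {stats['indexSize'] / mb:.1f} MB")
```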

Is your UI still displaying the logical size difference between the first screenshot and the detailed view (second screenshot)? If you are still seeing this difference in logical size between the two metrics views, I would raise this with the Atlas chat support team, as they would have more insight into your Atlas project / cluster in question.

Regards,
Jason


Thank you Jason. I am not seeing the difference anymore between the first and second screenshot. I believe you are correct that it was a difference in granularity.

I’m still seeing the difference in the third screenshot. When I look at my individual collections, adding up the total size of each is about 4x less than the logical size listed on the overview. Is there other hidden data somewhere?

For background, I’m copying data from JSON backups into Realm. I’m using a small iOS app to read the JSON, decode it into Realm objects, and sync up to the server using Flexible Sync. Is this inefficient in any way? Should I instead use the MongoDB Swift driver to populate my database? I’m not sure if using Realm in this way is creating a lot of extraneous metadata.

That’s good to hear.

I’m still seeing the difference in the third screenshot. When I look at my individual collections, adding up the total size of each is about 4x less than the logical size listed on the overview. Is there other hidden data somewhere?

Regarding the above, can you provide a screenshot from one of the collections and highlight which size you are adding up? I’m curious to see whether it’s the LOGICAL DATA SIZE, STORAGE SIZE, or INDEX SIZE. The third screenshot you initially provided does not include LOGICAL DATA SIZE (this value showing up in the UI in my screenshot may be a more recent change).

Can you also specify the total LOGICAL SIZE you are seeing in the UI, as well as each individual collection’s LOGICAL DATA SIZE?
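If it is easier than reading the values off each collection card in the UI, a rough sketch like the one below (same assumptions: pymongo, placeholder connection string and database name) would print each collection’s logical data and index sizes and compare the sum against the database-level totals:

```python
# Rough sketch (assumptions: pymongo, placeholder URI / db name).
# Prints each collection's logical data size and index size, then compares
# the sum against the database-level totals from dbStats.
from pymongo import MongoClient

client = MongoClient("ATLAS_URI")
db = client["mydb"]
mb = 1024 * 1024

total_data = total_index = 0
for name in db.list_collection_names():
    stats = db.command("collStats", name)
    total_data += stats["size"]             # logical (uncompressed) data size
    total_index += stats["totalIndexSize"]  # total size of all indexes
    print(f"{name}: data {stats['size'] / mb:.1f} MB, "
          f"indexes {stats['totalIndexSize'] / mb:.1f} MB")

db_stats = db.command("dbStats")
print(f"sum of collections: {(total_data + total_index) / mb:.1f} MB")
print(f"dbStats data+index: {(db_stats['dataSize'] + db_stats['indexSize']) / mb:.1f} MB")
```

Note this only looks at the collections of a single database, so it would not account for anything stored elsewhere on the cluster.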

Regards,
Jason


Thanks Jason. Here’s my total size showing as more than 500 MB:


And here are all of my collections. If my math is correct, adding up Logical Data and Indexes, these come to about 130 MB.

Thanks @Harry_Netzer1,

Can you try connecting via MongoDB Compass and checking the available databases? I am wondering if you are able to see a __realm_sync database and, if so, what its size is. To my knowledge this database cannot be seen via the Data Explorer, which is why I suggested MongoDB Compass.

For your reference, my current theory for what may be consuming the storage without being visible in the Atlas Data Explorer is related to the following post: __realm_sync history taking up all the storage on Atlas cluster
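If Compass is not convenient, a small sketch like the following (again assuming pymongo and a placeholder connection string) would list every database on the cluster, including __realm_sync, along with its size on disk:

```python
# Small sketch (assumptions: pymongo, placeholder URI). Lists every database
# on the cluster with its size on disk, which should surface __realm_sync
# even though the Data Explorer does not show it.
from pymongo import MongoClient

client = MongoClient("ATLAS_URI")
for db_info in client.list_databases():
    print(f"{db_info['name']}: {db_info['sizeOnDisk'] / 1024 / 1024:.1f} MB")
```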

Regards,
Jason


Thanks Jason. I am seeing this __realm_sync database with some largish tables:

Is the next step emailing Ian Ward to enable compaction? Thanks for your help!


Hi @Harry_Netzer1,

If you terminate / re-enable sync, it should rebuild the sync history.

However, as the sync history will then begin to grow again, you may hit this limit once more. Should you find your application often hitting this limit, it might be best to consider upgrading to a higher-tier cluster.

Regards,
Jason


This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.