I have read here that a time series bucket cannot have more than 1000 measurements/documents inside one bucket, and also here that it is "by default" 1000 documents per bucket.
Is there a way to increase this limit?
My use case is that I want to have many small documents, each with one measurement from one sensor. The increased number of buckets makes read queries slower compared to other schemas, such as fitting all sensors' measurements at one timestamp into one document.
Hi Zongji! Thank you for your question!
No, 1000 documents per bucket is the maximum for time series collections. The bucket size can automatically be lower depending on the granularity settings, but it cannot exceed 1000.
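For reference, granularity is set when the time series collection is created. A minimal mongosh sketch (the collection and field names here are placeholders, not from your schema):

```js
// mongosh — collection/field names are placeholders
db.createCollection("sensor_readings", {
  timeseries: {
    timeField: "ts",        // required: timestamp of each measurement
    metaField: "sensor",    // optional: measurements with the same metaField value are bucketed together
    granularity: "seconds"  // one of "seconds", "minutes", "hours"
  }
})
```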
If you’re concerned about read performance, I recommend measuring it first with sample data. If you’re still concerned afterwards, consider the following optimizations: adjust the granularity settings mentioned above, or aggregate multiple measurements before storing them.
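For the second suggestion, here is a minimal sketch in plain Python of collapsing one-measurement-per-sensor readings into a single document per timestamp before inserting. The field names (`ts`, `sensor`, `value`, `measurements`) are my own assumptions, not anything MongoDB requires:

```python
from collections import defaultdict

def group_by_timestamp(readings):
    """Collapse one-measurement-per-sensor readings into a single
    document per timestamp, so each insert carries all sensors at once.

    `readings` is an iterable of dicts shaped like
    {"ts": ..., "sensor": "s1", "value": 1.0} (hypothetical shape).
    """
    grouped = defaultdict(dict)
    for r in readings:
        grouped[r["ts"]][r["sensor"]] = r["value"]
    # One document per timestamp: {"ts": ..., "measurements": {...}}
    return [{"ts": ts, "measurements": vals}
            for ts, vals in sorted(grouped.items())]

docs = group_by_timestamp([
    {"ts": 1, "sensor": "s1", "value": 20.5},
    {"ts": 1, "sensor": "s2", "value": 40.0},
    {"ts": 2, "sensor": "s1", "value": 21.0},
])
```

With this shape, three raw readings become two documents, so fewer documents land in each bucket and a read for one timestamp touches one document instead of several.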
Hi!
Thank you for the answer.
I looked into the code repo a bit.
The commit here suggests that it is possible to configure this limit via the configuration file at server startup.
Is that no longer the case?
Looking at the older versions, this parameter was never documented. You could test it with 5.0, where time series collections were first introduced, or with the version from the commit you linked.
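If you do want to experiment, an undocumented server parameter would normally go under `setParameter` in the mongod config file. A sketch, assuming the parameter name from the linked commit is still honored (I have not verified this, and it could be ignored or removed in newer versions):

```yaml
# mongod.conf — hedged sketch; the parameter name below comes from the
# server source via the linked commit, NOT from official documentation,
# and may not exist or have any effect in your server version.
setParameter:
  timeseriesBucketMaxCount: 2000
```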
However, I generally refrain from using non-documented features.