Hello Everyone,
I’m Emmanuel Katto. I’ve been working with time-series data and recently ran into a bucketing issue when handling large arrays. Specifically, I was loading observation data in which each measurement included two large arrays of 7200 doubles each. After the load, I noticed that the buckets were mapped 1 to 1 with the documents, i.e. one bucket per inserted measurement. When I removed the arrays, the time-series bucketing worked as expected; reintroducing the arrays brought back the 1-document-to-1-bucket behavior.
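In case it helps to reproduce, here is a rough sketch of what my load looks like, using pymongo. The collection and field names (`observations`, `ts`, `sensor`, `waveform_a`, `waveform_b`) are placeholders rather than my real schema, and reading `system.buckets.observations` relies on the internal bucket collection, which may differ by server version:

```python
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["obs_test"]

# Create a time-series collection keyed on "ts" (placeholder names).
db.create_collection(
    "observations",
    timeseries={"timeField": "ts", "metaField": "sensor", "granularity": "seconds"},
)
coll = db["observations"]

start = datetime(2024, 1, 1, tzinfo=timezone.utc)

# Insert a handful of measurements, each carrying two 7200-element double arrays.
docs = [
    {
        "ts": start + timedelta(seconds=i),
        "sensor": "unit-1",
        "waveform_a": [0.0] * 7200,
        "waveform_b": [0.0] * 7200,
    }
    for i in range(10)
]
coll.insert_many(docs)

# Compare the number of measurements with the number of underlying buckets.
print("measurements:", coll.count_documents({}))
print("buckets:     ", db["system.buckets.observations"].count_documents({}))
```

With the arrays removed from the documents above, the bucket count stays well below the measurement count, which is the behavior I expected.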
I’m aware of the 16 MB per-document limit, so I measured a single document containing both arrays: it came to about 0.59 MB. Given that, I’m unsure whether this behavior is a bug or a design limitation.
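For the size check, I did something along these lines (again with placeholder field names); `bson.encode` is the encoder bundled with pymongo:

```python
from datetime import datetime, timezone
import bson  # the bson package that ships with pymongo

doc = {
    "ts": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "sensor": "unit-1",
    "waveform_a": [0.0] * 7200,
    "waveform_b": [0.0] * 7200,
}

# Encoded BSON size of one measurement. Arrays of doubles carry per-element
# overhead (type byte plus the stringified index used as the key), so this is
# noticeably larger than the raw 2 * 7200 * 8 bytes of the values themselves.
print("document size:", len(bson.encode(doc)), "bytes")
```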
Has anyone else run into this when working with large arrays in time-series collections? Is this expected behavior, or could it be a bug? Any insights or suggestions on how to manage this would be greatly appreciated.
Looking forward to your thoughts and advice!
Thank you in advance for your help!
Regards
Emmanuel Katto