Customers experience Realm sync error due to read limit of 16777217 bytes

We have a .NET Xamarin.Forms app that uses the Realm .NET SDK to sync the app to the Realm cloud using partition-based synchronization.

We are using an M20 dedicated cluster for Realm sync with the global deployment model.

Some of our customers started experiencing the following sync error:

could not decode next message: error reading body: failed to read: read limited at 16777217 bytes

When I looked for that magic 16777217 number in the forums, I found several threads without a resolution.

As far as I can understand, there’s a 16 MB limit somewhere. We are not using images anywhere in the sync process. However, some of our customers have large databases, and we do have “sharing” logic for the database, which is essentially a single transaction that copies the data to the target user’s database; Realm Sync then picks it up and syncs that database in the background.
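To make the pattern concrete, here is a minimal sketch of what that single-transaction copy looks like with the Realm .NET SDK; the Customer entity and CloneForSync helper are illustrative placeholders, not our actual model:

```csharp
using Realms;

// Illustrative entity; our real schema has many more types.
public class Customer : RealmObject
{
    [PrimaryKey]
    public string Id { get; set; }
    public string Name { get; set; }
}

public static class SharingSketch
{
    // Creates an unmanaged copy so the object can be added to another Realm.
    static Customer CloneForSync(Customer c) => new Customer { Id = c.Id, Name = c.Name };

    public static void ShareDatabase(Realm source, Realm target)
    {
        var customers = source.All<Customer>();

        // Everything is copied inside ONE write transaction, so the resulting
        // changeset (and the sync upload it produces) grows with the database
        // and can exceed the 16 MB message limit.
        target.Write(() =>
        {
            foreach (var customer in customers)
            {
                target.Add(CloneForSync(customer), update: true);
            }
        });
    }
}
```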

I could, in theory, split the data into chunks and use several transactions, but I do not know how to ensure that, for an even larger dataset, those individual transactions would not exceed 16 MB. I think this issue needs to be addressed at a lower level than simply creating more transactions.

What are the best practices in scenarios like this? How do we ensure smooth background synchronization even for the largest databases, which could in theory exceed gigabytes?

Can you file a support ticket for this? The support team will be best suited to advise you based on your use case. It’s also possible that you’re hitting some corner case that should be fixed on our end rather than requiring you to modify your usage pattern, and they have the tools to recognize that.

Hi @Gagik_Kyurkchyan,

Yes, there is one, and it is well documented: a single document in MongoDB cannot exceed a total size of 16777216 bytes. There’s GridFS if you need to store larger content; you can find more information in the docs or our Knowledge Base. However, GridFS isn’t supported by Realm at this time.

Therefore, Realm databases aren’t limited to being smaller than that, but single documents are. There are also advantages to this approach: if, as regularly happens on mobile, you have a huge transaction that fails halfway, the only way to recover is to start the transaction again, and you probably don’t want to waste gigabytes of mobile data on re-attempts.

Can you please explain your use case in detail? As illustrated above, keeping all data in massive documents is neither common nor advised on mobile; you may be better served by a different architecture.

As advised by @nirinchev, a Support case would be better, so that we can discuss details you may not be willing to share in a public forum.

@nirinchev and @Paolo_Manna thanks for getting back and sharing these details.

We do not have any single document that is larger than 16 MB. We may, however, have scenarios where a single transaction is larger than 16 MB, and from what I can see, a single transaction in Realm is represented by a single BSON document. That seems to be the culprit, and the reason a transaction can’t be larger than 16 MB.

Our use case is the following. We have a B2B Realm application. In the app, users can have both a synced database that uses partition-based sync and an offline database with the same schema. Users can create a “backup” of their Realm database. A backup is simply an offline Realm database into which we copy all of the data from the source database; the copy is performed manually, entity by entity.

At some point, users can either share that backup with their colleagues or restore it and overwrite their existing database. Overwriting means deleting all of their existing data and copying the data from the backed-up offline database into their Realm-synced database. This is where the issue occurs: we currently do the copy in a single transaction. I can now see that this isn’t a good idea if the database is large, and we probably need to split the restoration process into multiple transactions.

As for raising a support ticket, I will be happy to do so. Where can I do that? I have never done it before. I tried to use support.mongodb.com but I receive the following error:

Not Enabled for Support

To access the MongoDB Support Portal, please ensure that you are a member of a supported customer project.

Hi, I believe the error you are running into is actually because we require the WebSocket library to limit messages to 16 MB. Interestingly, we came across your application while looking through errors occurring in production and are in the process of discussing raising this limit. I will keep you in the loop on what we decide, but having limits on these kinds of things is important for the system to function properly and reliably.

This is happening because you have a single Realm transaction that is > 16 MB in size when compressed. Ideally, you would chunk your writes to be a bit smaller; Realm and Device Sync will ensure that smaller transactions are batched up to increase throughput. Unfortunately, we cannot break up a transaction ourselves (as that would break the concept of a transaction).

Therefore, the best option for you is to update your application to break up these large transactions into smaller ones. In the meantime, we will discuss raising this limit and keep you in the loop. As one added point, having such large transactions is likely to cause issues later on in the upload integration, especially on undersized clusters, as we need to apply all of these changes in a single MongoDB transaction.

Best,
Tyler

@Tyler_Kaye thanks, that makes sense, and I agree that a single web request should not be that large. In fact, I had assumed that a single transaction somehow gets magically “streamed”, and had I known before what I know now, I wouldn’t have created a single transaction for a full database copy.

I am going to work on splitting these transactions into chunks. The strategy I will take is to create per-entity “batches” small enough that I will never face this issue again, even if the database is humongous. We will do some load tests, and I think we should be good.
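For illustration, the batching approach I have in mind looks roughly like the sketch below, reusing the placeholder Customer entity and CloneForSync helper from my earlier sketch; the batch size is a guess to be tuned through load tests, not a value from our actual code:

```csharp
using System.Linq;
using Realms;

public static class RestoreSketch
{
    public static void RestoreInBatches(Realm backup, Realm synced)
    {
        const int batchSize = 500; // chosen so a batch stays comfortably under 16 MB

        // Materialize the source objects so we can slice them into batches.
        var customers = backup.All<Customer>().ToList();

        for (var offset = 0; offset < customers.Count; offset += batchSize)
        {
            var batch = customers.Skip(offset).Take(batchSize).ToList();

            // Each batch is committed in its own transaction, so no single
            // changeset grows with the total database size; Device Sync can
            // still coalesce small transactions for upload throughput.
            synced.Write(() =>
            {
                foreach (var customer in batch)
                {
                    synced.Add(SharingSketch.CloneForSync(customer), update: true);
                }
            });
        }
    }
}
```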

Appreciate everybody’s support.

From your Atlas Project page, there is a Support tab where you can activate it; there’s a free trial you can take advantage of if you’ve never had a Support contract.


@Gagik_Kyurkchyan sounds good. We have discussed adding an API that would allow us to define split points and break up transactions as we see fit, but right now the Realm API treats a transaction as a defined boundary that we cannot safely split up without breaking how users might expect changes to be replicated to devices.

Best,
Tyler

