Manage large updates without replica lag

I have a personalisation service which generates 10 GB of data per hour in JSON format. The data is updated frequently under high traffic. What is the best way to store this data in the database?

I tried writing it with batch updates, but even at 10% of our platform traffic it is triggering serious replica lag alerts.

{
    "device_id": "",
    "shows": [
        "show_id1", "show_id2", "show_id3"
    ]
}
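
For context, a minimal sketch of what such a batch write could look like with PyMongo (the collection name, batch size, and full-document replace are assumptions, not the exact code):

    # Rough sketch of the current batching approach (assumed), using PyMongo.
    # Database/collection names are placeholders, not from the original post.
    from pymongo import MongoClient, ReplaceOne

    client = MongoClient("mongodb://localhost:27017")
    coll = client["personalisation"]["recommendations"]

    def flush_batch(batch):
        # Each item replaces the whole document for a device in one bulk call.
        ops = [
            ReplaceOne({"device_id": doc["device_id"]}, doc, upsert=True)
            for doc in batch
        ]
        if ops:
            coll.bulk_write(ops, ordered=False)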

10 GB per hour… too much.

Are you sure you want to use a general-purpose database for this kind of thing? They are not designed for such heavy write loads.

What kind of operations do you run on this data? Do you ever search it? Creating indexes on that much data is also a big pain.

Maybe you want to try something like a distributed file system instead.

10 GB per hour is large-ish, but it is entirely manageable.

Optimizing the schema and the way data is updated can streamline writes, e.g. updating specific fields instead of rewriting the whole document (see the sketch below).
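
For example, a targeted field update only replicates the delta, while a full-document replace pushes the entire document through the oplog to every secondary. A minimal sketch with PyMongo (names are illustrative):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    coll = client["personalisation"]["recommendations"]

    # Full-document rewrite: the whole document is replicated,
    # even if only one show was added.
    coll.replace_one(
        {"device_id": "abc123"},
        {"device_id": "abc123", "shows": ["show_id1", "show_id2", "show_id3"]},
        upsert=True,
    )

    # Targeted update: only the change is written, which keeps oplog
    # entries small and reduces the work each secondary has to replay.
    coll.update_one(
        {"device_id": "abc123"},
        {"$addToSet": {"shows": {"$each": ["show_id3"]}}},
        upsert=True,
    )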

You have not specified whether this is self-hosted or Atlas. Identify the bottleneck and address it.

In Atlas that means selecting a higher tier; self-hosted, it means adding more RAM or faster disks.

If scaling up becomes prohibitive, then scale out using sharding to distribute the load among multiple MongoDB replica sets (see the sketch below).
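
A minimal sketch of what that could look like, assuming a hashed shard key on device_id so writes spread evenly (database, collection, and host names are illustrative):

    from pymongo import MongoClient

    # Connect to a mongos router of the sharded cluster.
    client = MongoClient("mongodb://mongos-host:27017")

    # Enable sharding on the database, then shard the collection on a hashed
    # device_id so writes are distributed instead of hitting one hot range.
    client.admin.command("enableSharding", "personalisation")
    client.admin.command(
        "shardCollection",
        "personalisation.recommendations",
        key={"device_id": "hashed"},
    )

A hashed key on device_id works well for point lookups by device; if you ever need ranged scans across devices, a different shard key may be a better fit.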