Hello @Stennie, thank you very much for your reply!
I apologize for the delayed answer, I’ve tried to retrieve some more information.
My OLTP workload is a real-time pipeline in which Apache Storm is the actor in charge of all the write operations.
Depending on the tuple it is processing, the Storm topology retrieves a previously written document from the DB through a "find" on the index keys and, depending on the values of some fields, performs an update on the retrieved data.
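To make sure I'm describing the pattern correctly, here is a minimal sketch of what each bolt does per tuple. All names (`device_id`, `event_ts`, `status`, `last_value`) are invented placeholders for our real index keys and condition fields, not the actual schema:

```python
def build_update(previous_doc, tuple_value):
    """Decide which update (if any) to apply, based on fields
    of the document retrieved by the index-based find."""
    if previous_doc and previous_doc.get("status") == "open":
        # Conditionally overwrite the payload and bump a counter.
        return {"$set": {"last_value": tuple_value},
                "$inc": {"update_count": 1}}
    return None  # this tuple triggers no write

# With pymongo, the bolt would roughly do (requires a running server,
# so the calls are commented out here):
# from pymongo import MongoClient
# coll = MongoClient()["mydb"]["events"]
# prev = coll.find_one({"device_id": dev, "event_ts": ts})  # index keys
# update = build_update(prev, value)
# if update is not None:
#     coll.update_one({"_id": prev["_id"]}, update)
```

So every tuple costs one indexed read plus, conditionally, one targeted write.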
We are talking about millions of operations per minute. On the other hand, there are also analytical workloads, performed with the MongoDB Aggregation framework by several microservice instances deployed on K8s.
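For context, the analytical queries are of the shape below. This is only an illustrative pipeline with made-up field names (`event_ts`, `device_id`, `last_value`), not one of our real aggregations:

```python
from datetime import datetime

# Hypothetical time window for the analysis.
start = datetime(2023, 1, 1)
end = datetime(2023, 1, 2)

# Typical shape: filter a time range, group per key, rank the groups.
pipeline = [
    {"$match": {"event_ts": {"$gte": start, "$lt": end}}},
    {"$group": {"_id": "$device_id",
                "avg_value": {"$avg": "$last_value"},
                "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
    {"$limit": 10},
]

# Executed server-side by mongod (needs a live connection, omitted here):
# results = list(coll.aggregate(pipeline))
```

These pipelines run concurrently with the Storm writes, which is where my response-time concern comes from.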
Response time is critical for both the OLTP and OLAP workloads, which is why I was thinking about an in-memory engine deployment.
I hope this clarifies the scenario a bit more, so we can understand whether I'm missing any drawbacks that could make this solution unfeasible.