A data lake is a central repository where a company stores large volumes of raw data in the hope of extracting value from it later.
According to a 2019 Gartner study, most data lake projects failed, mainly due to complexity and unmet promises of performance and usability.
On the other hand, deploying and running large-scale, mission-critical databases has never been easier thanks to MongoDB Atlas.
You can now unleash the power of the MongoDB Query Language and the Aggregation Framework on your analytical or archived data using Atlas Data Lake, removing complexity and extending operational usage to vast amounts of data.
This session will show you real-world cross-database use cases, from sub-millisecond MongoDB operational queries to large data lake queries, without the burden of setting up a complex environment or using exotic languages and software to query the data.
In particular, we will explore and demonstrate the tight relationship and the automatic data transfer between the operational database and the data lake.
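As a taste of what the session covers, here is a minimal sketch of how a cross-source query might look. It builds a standard aggregation pipeline that could be sent unchanged to either a live Atlas cluster or a Data Lake (federated) endpoint; the database, collection, and field names are hypothetical illustrations, and the connection string is deliberately omitted.

```python
# A federated query uses plain MongoDB Query Language: the same
# aggregation pipeline works whether the documents live in the
# operational cluster or in archived cloud storage.
pipeline = [
    # Filter live and archived orders alike by customer (field names
    # are illustrative, not from a real schema).
    {"$match": {"customerId": "C-1001"}},
    # Summarize order value per year across both sources.
    {"$group": {
        "_id": {"$year": "$orderDate"},
        "totalSpent": {"$sum": "$amount"},
        "orderCount": {"$sum": 1},
    }},
    # Return the yearly summaries in chronological order.
    {"$sort": {"_id": 1}},
]

# With a driver such as pymongo, one would then run something like
# (endpoint URI omitted on purpose):
#   client = pymongo.MongoClient("<federated-endpoint-uri>")
#   results = client["sales"]["orders"].aggregate(pipeline)

print(len(pipeline))  # three stages: $match, $group, $sort
```

The point of the sketch is that no new query language is involved: the familiar `$match`/`$group`/`$sort` stages apply to archived data exactly as they do to operational data.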
MongoDB Atlas will reconcile you with data lakes!