MongoDB can incorporate any kind of data – any structure, any format, any source – no matter how often it changes, so your analytical engines can be both comprehensive and real-time. In this video, you'll learn how MongoDB provides a data platform for data ingestion, data preparation, and AI model operationalization, powering real-time analytics that can have a major impact on the financial industry and beyond.
- Data ingestion: Using the MongoDB Kafka connector, collect and manage raw data from IoT devices, logs, and other semi-structured or unstructured data streams. MongoDB's schema-on-read design offers the agility to capture varied data structures, increasing the delivery velocity of your analytics projects.
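As a sketch of the ingestion step, a MongoDB sink connector can be registered with Kafka Connect using a configuration like the one below. The connector class and property names are standard for the MongoDB Kafka connector; the topic, database, and collection names are illustrative assumptions.

```json
{
  "name": "mongo-iot-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "iot-events",
    "connection.uri": "mongodb://localhost:27017",
    "database": "analytics",
    "collection": "raw_events",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false"
  }
}
```

Because MongoDB applies schema on read, the sink can land heterogeneous event payloads into `raw_events` without an upfront schema migration.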
- Data preparation: With MongoDB Compass, the native GUI for exploring and visualizing data, your data analysts can examine datasets and use the MongoDB aggregation pipeline to perform data-wrangling operations. The MongoDB Spark connector also lets your data scientists read MongoDB data into Spark and write results back, and apply AI libraries such as MLlib and TensorFlow to build models.
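To make the wrangling step concrete, here is a minimal sketch of an aggregation pipeline. The `$match`/`$group`/`$sort` stage syntax is standard MongoDB; the `transactions` collection and its `customer_id`, `amount`, and `status` fields are illustrative assumptions.

```python
# Hypothetical pipeline: total completed spend per customer, highest first.
pipeline = [
    {"$match": {"status": "completed"}},            # keep finished transactions
    {"$group": {                                    # total spend per customer
        "_id": "$customer_id",
        "total_spend": {"$sum": "$amount"},
    }},
    {"$sort": {"total_spend": -1}},
]

# Against a live deployment this would run as (requires pymongo and a server):
#   from pymongo import MongoClient
#   results = MongoClient().analytics.transactions.aggregate(pipeline)

# For illustration, the same $match/$group logic applied in plain Python
# to a few sample documents:
sample_docs = [
    {"customer_id": "a", "amount": 10, "status": "completed"},
    {"customer_id": "a", "amount": 5,  "status": "completed"},
    {"customer_id": "b", "amount": 7,  "status": "pending"},
]
totals = {}
for doc in sample_docs:
    if doc["status"] == "completed":
        totals[doc["customer_id"]] = totals.get(doc["customer_id"], 0) + doc["amount"]
print(totals)  # {'a': 15}
```

The same pipeline definition can be reused unchanged from Compass, the `mongosh` shell, or a driver, which is what makes the aggregation framework a practical wrangling tool for analysts.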
- AI model operationalization: Once your AI model is trained, enrich your MongoDB operational layer with its output. For instance, a user profile can be augmented with a propensity score, which your application then uses to deliver personalized interactions. In addition, MongoDB change streams and the Kafka connector enable event-aware, real-time architectures, pushing results and alerts to your applications.
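The alerting step above can be sketched with a change stream. The `watch()` API and the `operationType`/`updateDescription` fields are standard MongoDB; the `profiles` collection, the `propensity_score` field, and the 0.8 threshold are illustrative assumptions. A change stream needs a replica set, so the connection code is wrapped in a function rather than executed at import time.

```python
from typing import Any

# Only surface updates where the newly written propensity score is high.
watch_pipeline: list[dict[str, Any]] = [
    {"$match": {
        "operationType": "update",
        "updateDescription.updatedFields.propensity_score": {"$gte": 0.8},
    }}
]

def push_alerts() -> None:
    """Stream high-propensity profile updates to the application layer.

    Requires pymongo and a MongoDB replica set; sketch only.
    """
    from pymongo import MongoClient
    client = MongoClient()
    with client.analytics.profiles.watch(watch_pipeline) as stream:
        for change in stream:
            # In production this could publish to a Kafka topic instead
            # of printing, closing the loop back to the event bus.
            print("alert:", change["documentKey"])
```

Filtering inside the change stream pipeline keeps the alerting path cheap: only matching events ever leave the server.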
Together, MongoDB and its Kafka and Spark connectors make your AI DevOps team more efficient and shorten the time to value of your AI projects.