A fully managed Atlas service with document model flexibility.
Use the Query API and aggregation framework—a familiar and powerful interface—to handle stream processing.
Available in 11 AWS regions across the U.S., Europe, and APAC, with more providers and regions coming soon.
Create time-based windows and other operations for complex, multi-event processing.
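For illustration, a windowed aggregation can be written with ordinary pipeline stages. The mongosh sketch below assumes a Kafka connection named "kafkaProd" and a "sensors" topic; all names are placeholders, not part of the product itself.

```javascript
// Illustrative sketch: aggregate sensor readings per device over
// tumbling 60-second windows. Connection, topic, and field names
// ("kafkaProd", "sensors", "device_id", "temp") are assumptions.
const windowedPipeline = [
  { $source: { connectionName: "kafkaProd", topic: "sensors" } },
  {
    $tumblingWindow: {
      interval: { size: NumberInt(60), unit: "second" },
      pipeline: [
        {
          $group: {
            _id: "$device_id",          // one result per device per window
            avgTemp: { $avg: "$temp" }, // average reading in the window
            readings: { $sum: 1 }       // number of events in the window
          }
        }
      ]
    }
  }
];
```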
Easily connect to your key streaming sources and sinks, such as Kafka and Atlas, and merge data continuously.
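As a sketch, a pipeline that reads from a Kafka topic and continuously merges results into an Atlas collection might look like the following; the connection names stand in for entries you would define in your own connection registry.

```javascript
// Illustrative sketch: Kafka source in, Atlas collection out.
// "kafkaProd" and "atlasProd" are hypothetical connection names.
const kafkaToAtlas = [
  { $source: { connectionName: "kafkaProd", topic: "orders" } },
  {
    $merge: {
      into: { connectionName: "atlasProd", db: "sales", coll: "orders_live" }
    }
  }
];
```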
Built-in support for validation to ensure data correctness, plus intuitive error handling. Use Atlas collections as a dead letter queue (DLQ).
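One possible shape for this, sketched in mongosh with illustrative names throughout: documents that fail validation are routed to an Atlas collection configured as the processor's DLQ.

```javascript
// Illustrative sketch: validate incoming documents and send failures to
// an Atlas collection acting as a dead letter queue. Every connection,
// database, collection, and field name here is an assumption.
const validatedPipeline = [
  { $source: { connectionName: "kafkaProd", topic: "orders" } },
  {
    $validate: {
      validator: {
        $jsonSchema: { required: ["order_id", "amount"] }
      },
      validationAction: "dlq" // route invalid documents to the DLQ
    }
  },
  {
    $merge: {
      into: { connectionName: "atlasProd", db: "sales", coll: "orders_clean" }
    }
  }
];

// The DLQ is configured on the processor itself.
sp.createStreamProcessor("cleanOrders", validatedPipeline, {
  dlq: { connectionName: "atlasProd", db: "sales", coll: "orders_dlq" }
});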
In the event of a failure, checkpoints let stream processors restart automatically while avoiding unnecessary data reprocessing.
Processing streaming data can be opaque. Use .process() to iteratively explore your results as you build.
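For example, an ephemeral run in mongosh might look like the sketch below; output streams to the shell while you iterate, and nothing is persisted. The connection, topic, and field names are illustrative.

```javascript
// Illustrative sketch: explore a pipeline interactively before
// creating a named stream processor from it.
sp.process([
  { $source: { connectionName: "kafkaProd", topic: "sensors" } },
  { $match: { temp: { $gt: 80 } } } // peek only at hot readings
]);
```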
Start with the multi-cloud database service built for resilience, scale, and the highest levels of data privacy and security.
Automatically run code in response to database changes, user events, or on preset intervals.
Natively integrate MongoDB data within the Kafka ecosystem.
Streaming data lives inside event streaming platforms (like Apache Kafka), which are essentially immutable distributed logs. Event data is published to and consumed from these platforms using APIs.
Developers need to use a stream processor to perform more advanced processing, such as stateful aggregations, window operations, mutations, and the creation of materialized views. These are similar to the operations one performs when running queries on a database, except that stream processing continuously queries an endless stream of data. This area of streaming is more nascent; however, technologies such as Apache Flink and Spark Streaming are quickly gaining traction.
With Atlas Stream Processing, MongoDB provides developers with a better way to process streams for use in their applications, leveraging the aggregation framework.
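Putting the pieces together, a complete stream processor built from familiar aggregation stages might be sketched as follows; all connection, database, collection, and field names are assumptions used only for illustration.

```javascript
// Illustrative sketch: the same aggregation framework syntax used for
// queries, applied continuously to a stream, then started by name.
const pipeline = [
  { $source: { connectionName: "kafkaProd", topic: "sensors" } },
  {
    $tumblingWindow: {
      interval: { size: NumberInt(5), unit: "minute" },
      pipeline: [
        { $group: { _id: "$device_id", maxTemp: { $max: "$temp" } } }
      ]
    }
  },
  {
    $merge: {
      into: { connectionName: "atlasProd", db: "telemetry", coll: "device_maxes" }
    }
  }
];

sp.createStreamProcessor("deviceMaxes", pipeline);
sp.deviceMaxes.start(); // runs continuously until stopped
```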
Stream processing happens continuously. In the context of building event-driven applications, stream processing enables reactive and compelling experiences like real-time notifications, personalization, route planning, or predictive maintenance.
Batch processing, by contrast, does not act on data as it is continuously produced. Instead, batch processing works by gathering data over a specified period of time and then processing that static data as needed. An example of batch processing is a retail business collecting sales at the close of business each day for reporting purposes and/or to update inventory levels.