
Announcing Kinesis Support for MongoDB Atlas Stream Processing

January 7, 2026 ・ 3 min read

AWS Kinesis Data Streams captures events from applications, IoT devices, clickstreams, and logs. For teams building on MongoDB, however, getting that streaming data into MongoDB Atlas for operational queries, or enriching Kinesis streams with data from MongoDB collections, has been a challenge: connecting the two systems required custom integration code, ongoing maintenance, and deep expertise in both platforms.

The latest enhancement to MongoDB Atlas Stream Processing addresses these issues by introducing a native integration with Kinesis Data Streams.

You can now:

  • Read from Kinesis streams, process events, and write to MongoDB Atlas.
  • Stream changes from MongoDB Atlas collections directly to Kinesis for downstream AWS services.
  • Build transformation pipelines that read from Kinesis, enrich with MongoDB Atlas data, and write back to Kinesis.
  • Secure all connections with IAM AssumeRole and PrivateLink endpoints.

This post walks through the technical details of the integration, providing code examples, highlighting key security features, and illustrating end-to-end workflow patterns.

Reading from Kinesis Data Streams

With Kinesis Data Streams as a source, MongoDB Atlas Stream Processing can consume event streams for immediate processing and analysis. This is particularly useful for applications requiring low-latency insights from constantly flowing data.

Figure 1. Reading from Kinesis (source): MongoDB Atlas Stream Processing consumes event streams for real-time processing and validation.
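
The source configuration below is a minimal sketch based on the description in this post: the connection name and the field names (streamName, region, consumerARN) are illustrative assumptions, so check the documentation for the exact syntax.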


JavaScript
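// Illustrative $source stage for reading from Kinesis. The field names
// mirror the prose below and are assumptions, not confirmed syntax.
let kinesisSource = {
  $source: {
    connectionName: "myKinesisConnection", // Kinesis connection in the Connection Registry
    streamName: "device-events",           // Kinesis Data Stream to consume
    region: "us-east-1",                   // AWS region of the stream
    // Enhanced Fan-Out consumer that gives this processor dedicated throughput
    consumerARN: "arn:aws:kinesis:us-east-1:123456789012:stream/device-events/consumer/asp-consumer:1"
  }
};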

This configuration specifies the Kinesis stream name, the AWS region, and the consumer ARN that identifies the consumer. MongoDB Atlas Stream Processing currently supports the Enhanced Fan-Out consumer, which provides dedicated throughput for each stream processor.

Writing to Kinesis Data Streams

MongoDB Atlas Stream Processing can also act as a sink, allowing you to write processed data back to Kinesis Data Streams. This enables seamless integration with other AWS services that consume data from Kinesis and makes it easier to build complex data pipelines.

Figure 2. Writing to Kinesis (sink): MongoDB Atlas Stream Processing streams processed data to AWS Kinesis Data Streams.
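
As a sketch of the sink side, the configuration might look like the following, assuming Kinesis sinks use an $emit-style stage as Kafka sinks do; the field names are again illustrative.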


JavaScript
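// Illustrative Kinesis sink stage, assuming an $emit-style stage as used
// for Kafka sinks; field names are assumptions, not confirmed syntax.
let kinesisSink = {
  $emit: {
    connectionName: "myKinesisConnection", // Kinesis connection in the Connection Registry
    streamName: "processed-events",        // output Kinesis Data Stream
    region: "us-east-1",                   // AWS region of the stream
    partitionKey: "$deviceId"              // value Kinesis hashes to choose a shard
  }
};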

This configuration specifies the output stream, AWS region, and a partitionKey. Kinesis uses the partitionKey to distribute records across shards, so choosing an appropriate key (such as a user ID or device ID) ensures records spread evenly across shards while preserving ordering for records that share a key.

End-to-end workflows

The following workflows illustrate common patterns for integrating Kinesis with Atlas Stream Processing in production scenarios.

Figure 3. Kinesis to MongoDB Atlas workflow: Ingesting and validating event streams for operational queries.
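
A pipeline for this pattern might look like the sketch below. The $match, $addFields, and $merge stages are standard Atlas Stream Processing aggregation stages; the Kinesis $source fields and connection names are illustrative, as above.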


JSON
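[
  {
    "$source": {
      "connectionName": "myKinesisConnection",
      "streamName": "device-events",
      "region": "us-east-1"
    }
  },
  { "$match": { "temperature": { "$gte": -50, "$lte": 150 } } },
  { "$addFields": { "processedAt": "$$NOW" } },
  {
    "$merge": {
      "into": { "connectionName": "myAtlasCluster", "db": "iot", "coll": "readings" }
    }
  }
]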

This workflow ingests event streams from Kinesis, validates and transforms them in near real time, then stores the processed data in MongoDB Atlas for analysis and application queries.

Figure 4. MongoDB Atlas to Kinesis workflow: Streaming MongoDB changes to downstream AWS services.
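
A sketch of this pattern follows. Using an Atlas collection as the $source opens a change stream on it, and the $match stage filters change events by operationType; the Kinesis sink fields remain illustrative assumptions.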


JSON
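[
  {
    "$source": {
      "connectionName": "myAtlasCluster",
      "db": "sales",
      "coll": "orders"
    }
  },
  { "$match": { "operationType": "insert" } },
  {
    "$emit": {
      "connectionName": "myKinesisConnection",
      "streamName": "order-events",
      "region": "us-east-1",
      "partitionKey": "$fullDocument.customerId"
    }
  }
]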

This workflow streams changes from a MongoDB Atlas collection through MongoDB Atlas Stream Processing to Kinesis, enabling integration with downstream AWS services such as Lambda and SageMaker, or with analytics pipelines.

Figure 5. Kinesis to Kinesis workflow: Enriching event streams with MongoDB data before forwarding to other services.
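
The sketch below enriches each event with a $lookup against an Atlas collection between a Kinesis $source and a Kinesis sink; the Kinesis-specific fields are illustrative, as in the earlier examples.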


JSON
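[
  {
    "$source": {
      "connectionName": "myKinesisConnection",
      "streamName": "raw-clicks",
      "region": "us-east-1"
    }
  },
  {
    "$lookup": {
      "from": { "connectionName": "myAtlasCluster", "db": "app", "coll": "customers" },
      "localField": "customerId",
      "foreignField": "_id",
      "as": "customer"
    }
  },
  {
    "$emit": {
      "connectionName": "myKinesisConnection",
      "streamName": "enriched-clicks",
      "region": "us-east-1",
      "partitionKey": "$customerId"
    }
  }
]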

This workflow reads from one Kinesis stream, enriches events with data from MongoDB Atlas, and then writes the transformed results to a separate Kinesis stream.

Robust security features

IAM AssumeRole

The Kinesis integration uses AWS IAM AssumeRole for authentication. Instead of storing long-lived AWS credentials in MongoDB Atlas Stream Processing, you configure an IAM role with the necessary permissions (e.g., kinesis:GetRecords, kinesis:PutRecords). MongoDB Atlas Stream Processing assumes this role when connecting to Kinesis, following the principle of least privilege. You can use the AWS Unified Access feature in MongoDB Atlas to configure the trust relationship.
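
For illustration, the trust policy on that IAM role follows the standard AWS trust relationship shape, with placeholders for the principal ARN and external ID that Atlas provides during setup: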


JSON
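{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "<atlas-aws-account-arn>"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<atlas-external-id>"
        }
      }
    }
  ]
}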

PrivateLink endpoint support

If you have stringent network security requirements, MongoDB Atlas Stream Processing supports PrivateLink endpoints for Kinesis Data Streams integration. This keeps your connections to Kinesis within the AWS network without traversing the public internet.

Start building stream processing pipelines today

Native Kinesis support removes the integration tax between your AWS event streams and MongoDB Atlas. Whether you're ingesting IoT telemetry at scale, building change data capture pipelines to feed SageMaker or Lambda, or enriching event streams with operational data from MongoDB Atlas, you can now build these workflows without writing or maintaining custom connector code.

Next Steps

Ready to get started? Refer to our documentation for detailed instructions on configuring your stream processing pipeline.
