MongoDB Updates

The newest releases and freshest updates

MongoDB Connector for Apache Kafka 1.4 Available Now

As businesses continue to embrace event-driven architectures and tackle Big Data opportunities, companies are finding great success integrating Apache Kafka and MongoDB. These two complementary technologies provide the power and flexibility to solve these large-scale challenges. Today, MongoDB continues to invest in the MongoDB Connector for Apache Kafka with the release of version 1.4! Over the past few months, we've been collecting feedback and learning how to best help our customers integrate MongoDB within the Apache Kafka ecosystem. This article highlights some of the key features of this new release.

Selective Replication in MongoDB

Being able to track just the data that has changed is an important use case in many solutions. Change Data Capture (CDC) has been available on the sink since the original version of the connector. However, up until version 1.4, CDC events could only be sourced from MongoDB via the Debezium MongoDB Connector. With the latest release you can specify the MongoDB Change Stream Handler on the sink to read and replay MongoDB events sourced from MongoDB using the MongoDB Connector for Apache Kafka. This feature enables you to record insert, update, and delete activity on a namespace in MongoDB and replay it on a destination MongoDB cluster. In effect, you have a lightweight way to perform basic replication of MongoDB data via Kafka.

Let's dive in and see what is happening under the hood. Recall that when the connector is used as a source, it starts a change stream on a specific namespace. Depending on how you configure the source connector, documents that match your namespace and pipeline criteria are written into a Kafka topic. By default, these documents are in the change stream event format. Here is a partial message in the Kafka topic that was generated from the following statement:

```javascript
db.Source.insert({proclaim: "Hello World!"});
```

```json
{
  "schema": { "type": "string", "optional": false },
  "payload": {
    "_id": { "_data": "82600B38...." },
    "operationType": "insert",
    "clusterTime": { "$timestamp": { "t": 1611348141, "i": 2 } },
    "fullDocument": {
      "_id": { "$oid": "600b38ad6011ef6265c3acd1" },
      "proclaim": "Hello World!"
    },
    "ns": { "db": "Tutorial3", "coll": "Source" },
    "documentKey": { "_id": { "$oid": "600b38ad6011ef6265c3acd1" } }
  }
}
```

Now that our change stream message is in the Kafka topic, we can use the connector as a sink to read the stream of messages and replay them at the destination cluster. To set up the sink to consume these events, set the change.data.capture.handler property to the new com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler class.

Notice that one of the fields is "operationType". The sink connector only supports insert, update, and delete operations on the namespace; it does not support actions such as the creation of database objects (users, namespaces, indexes, views) or the other metadata changes handled by more traditional replication solutions. In addition, this capability is not intended as a replacement for a full-featured replication system, as it cannot guarantee transactional consistency between the two clusters. That said, if all you need to do is move data and can accept the lack of transactional consistency, the new ChangeStreamHandler gives you a simple solution. To work through a tutorial on this new feature, check out Tutorial 3 of the MongoDB Connector for Apache Kafka Tutorials in GitHub.
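For reference, here is a minimal sketch of what a sink configuration using the new handler might look like. The connector name, topic, connection string, database, and collection values are illustrative placeholders, and converter settings (omitted here) must match how the source connector serialized the change stream events; the 1.4-specific piece is the change.data.capture.handler property.

```json
{
  "name": "mongo-cdc-replay-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "Tutorial3.Source",
    "connection.uri": "mongodb://destination-mongo:27017",
    "database": "Tutorial3",
    "collection": "Destination",
    "change.data.capture.handler": "com.mongodb.kafka.connect.sink.cdc.mongodb.ChangeStreamHandler"
  }
}
```

With the source connector publishing change stream events to the same topic, inserts, updates, and deletes made on the source namespace are replayed against the destination collection.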
Dynamic Namespace Mapping

When we use the MongoDB connector as a sink, we take data that resides in a Kafka topic and insert it into a collection. Prior to 1.4, once this mapping was defined it wasn't possible to route topic data to another collection. In this release we added the ability to dynamically map a namespace to the contents of the Kafka topic message.

For example, consider a Kafka topic "Customers.Orders" that contains the following messages:

```json
{"orderid": 1, "country": "ES"}
{"orderid": 2, "country": "US"}
```

We would like these messages to be placed in their own collections based upon the country value. Thus, the message with "orderid" 1 will be copied into a collection called "ES", and the message with "orderid" 2 will be copied into a collection called "US". To configure this scenario, we define a sink using the new namespace.mapper property set to "com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper". Using this mapper, we can use a key or value field to determine the database and collection, respectively. In our example, let's define our config using the value of the country field as the collection name to sink to:

```json
{
  "name": "mongo-dynamic-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "topics": "Customers.Orders",
    "connection.uri": "mongodb://mongo1:27017,mongo2:27017,mongo3:27017",
    "database": "Orders",
    "collection": "Other",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "namespace.mapper": "com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper",
    "namespace.mapper.value.collection.field": "country"
  }
}
```

Messages that do not have a country value will by default be written to the namespace defined in the configuration, just as they would have been without the mapping. However, if you want messages that do not conform to the map to generate an error, simply set the property namespace.mapper.error.if.invalid to true. This will raise an error and stop the connector when messages cannot be mapped to a namespace due to missing fields or fields that are not strings.

If you'd like more control over the namespace, you can implement the getNamespace method of the com.mongodb.kafka.connect.sink.namespace.mapping.NamespaceMapper interface. Implementations of this method can apply more complex business rules and can access the SinkRecord or SinkDocument as part of the logic to determine the destination namespace.

Dynamic Topic Mapping

Once the source connector is configured, change stream events flow from the namespace defined in the connector to a Kafka topic. The name of the Kafka topic is made up of three configuration parameters: topic.prefix, database, and collection. For example, if your source connector configuration included:

```json
"topic.prefix": "Stocks",
"database": "Customers",
"collection": "Orders"
```

the Kafka topic that would be created is "Stocks.Customers.Orders". However, what if you didn't want the events in the Orders collection to always go to this specific topic? What if you wanted to determine at run time which topic a specific message should be routed to? In 1.4 you can now specify a namespace map that defines which Kafka topic a namespace should be written to.
For example, consider the following map:

```json
{"Customers": "CustomerTopic", "Customers.Orders": "Orders"}
```

This will map all change stream documents from the Customers database to CustomerTopic.<collectionName>, apart from any documents from the Customers.Orders namespace, which map to the Orders topic. If you need complex business logic to determine the route, you can implement the getTopic method in the new TopicMapper class to handle this mapping logic.

Also note that 1.4 introduced a topic.suffix configuration property in addition to topic.prefix. Using our example above, you can configure:

```json
"topic.prefix": "Stocks",
"database": "Customers",
"collection": "Orders",
"topic.suffix": "US"
```

This will define the topic to write to as "Stocks.Customers.Orders.US".

Next Steps

- Download the latest MongoDB Connector for Apache Kafka 1.4 from the Confluent Hub!
- Read the MongoDB Connector for Apache Kafka documentation.
- Questions? Need help with the connector? Ask the Community.
- Have a feature request? Provide feedback or file a JIRA.

February 9, 2021
Updates

MongoDB Atlas Online Archive for Data Tiering is now GA

We're thrilled to announce that MongoDB Atlas Online Archive is now Generally Available. With Online Archive, you can seamlessly tier your data across Atlas clusters and fully managed cloud object stores, gaining the flexibility to set the perfect price-to-performance ratio across your data. Eliminate the need to manually migrate or delete valuable data: simply set a rule on your Atlas cluster to automate data archival while retaining easy access to query all your data using a single connection string. With this capability, you can bring new and previously cost-prohibitive use cases onto MongoDB Atlas, our first-class managed offering, and manage your entire data lifecycle without replicating or migrating it across multiple systems.

What is Atlas Online Archive?

Online Archive is a fully managed data tiering solution that allows you to tier data across your "hot" database storage layer and "colder" cloud object storage to maintain queryability while optimizing for cost and performance. Online Archive is a good fit for many different use cases, including:

- Insert-heavy workloads, where data is immutable and has lower performance requirements as it ages
- Historical log keeping and time-series datasets
- Storing valuable data that would have otherwise been deleted using TTL indexes

We've received amazing feedback from the community over the past few months while the feature was in beta, and we're now confident in supporting your production workloads. Our users have put the feature through a variety of use cases in production and development workloads, which has enabled us to make a wide range of improvements.

"Online Archive gives me the flexibility to store all of my data without incurring high costs, and feel safe that I won't lose it. It's the perfect solution." - Ran Landau, CTO, Splitit

Autonomous Archival Management

It's easy to get started with Online Archive, and it requires no ongoing maintenance once it's been set up. To activate the feature, follow these simple steps:

- Navigate to the "Online Archive" tab on your cluster card and begin the setup flow.
- Set an archiving rule by selecting a date field (with dot notation if it's nested) or creating a custom filter.
- Choose commonly queried fields that you want your archival queries to be optimized for, with a few things in mind:
  - Your data will always be "partitioned" by the date field in your archive, but can be partitioned by up to two additional fields as well.
  - The fields that you most commonly query should be towards the top of the list (the date field can be moved to the top or bottom).
  - Query fields should be chosen carefully, as they cannot be changed after the fact and will have a large impact on query performance.
  - Avoid choosing a field that has unique values, as it will have negative performance impacts for queries that need to scan lots of data.

And you're done! MongoDB Atlas will automatically move data off of your cluster and into a more cost-effective storage layer that can still be queried, via a single connection string that combines cluster and archive data, powered by Atlas Data Lake.
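As a rough illustration of what querying across tiers with that single connection string looks like, here is a minimal mongosh sketch. The connection string, database, and collection names are placeholders; the real string is copied from your cluster's Connect dialog once an archive is active.

```javascript
// Hypothetical federated connection string that unifies cluster + archive data.
// Copy the real value from the Atlas "Connect" dialog for your Online Archive.
// mongosh "mongodb://<user>:<password>@archive-federation-example.query.mongodb.net/iot"

// A normal query; matching documents are returned whether they live on the
// cluster (hot tier) or in the archive (cloud object storage).
db.sensorReadings.find({
  deviceId: "device-42",
  ts: { $gte: ISODate("2019-01-01") }
}).sort({ ts: 1 });
```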
What's Next?

Along with announcing Online Archive as Generally Available, we're excited to share a few additional product enhancements which should be available in the coming months:

- Custom filters for your archival rules using a non-date-based field
- Support for BYO key encryption on your archival data
- A dedicated connection string for archive-only queries
- Support for additional time formats
- Improved performance and stability

Try Atlas Online Archive

Online Archive allows you to right-size your Atlas clusters by storing hot data that is regularly accessed in live storage and moving colder data to a cheaper storage tier. Billing for this feature includes the cost to store data in our fully managed cloud object storage and usage-based pricing for querying archive data. We can't wait to see what new workloads you'll bring onto MongoDB Atlas with the new flexibility provided by Online Archive!

To get started, sign up for an Atlas account and deploy any dedicated cluster (M10 or higher). Have questions? Check out the documentation or head over to our community forums to get answers from fellow developers. And if we're missing a feature you'd like to see, please let us know!

Safe Harbor Statement

The development, release, and timing of any features or functionality described for MongoDB products remains at MongoDB's sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision, nor is it a commitment, promise, or legal obligation to deliver any material, code, or functionality. Except as required by law, we undertake no obligation to update any forward-looking statements to reflect events or circumstances after the date of such statements.

November 30, 2020
Updates

New Ways to Customize Your Charts

When it comes to building charts, we know that details matter. Small differences in layout, styling, or composition can make a big difference in how well your chart communicates the story behind your data. That's why we've just released a whole bunch of new capabilities in MongoDB Charts, giving you more control than ever. Here's what's new:

- Secondary Y Axis: Charts can be a great way to show correlation between two different datasets, but when their scales differ greatly it can be hard to see the correlation. By plotting one or more series on a secondary Y axis, you let each series make the most of the available space and highlight any interesting relationships. The secondary Y axis can be enabled on Grouped Column, Discrete Line, Continuous Line, and Continuous Area charts.
- Legend Position: Chart legends can now be moved to the top, right, or bottom of your chart, or hidden altogether.
- "All Others" Group: Charts has long allowed you to limit a chart to show, say, just the top 10 values. The new "All Others" option lets you add an additional bar or donut segment that shows the value of all other categories not included in the limit.
- "Count by Value" aggregation: Building multi-series charts is now easier than ever with the new "Count by Value" aggregation option. This will automatically create a series from each distinct value found in a field.
- String binning with Regular Expressions: Last month we introduced binning of string values, allowing you to choose the exact values to go into each bin. This month we've extended this further by allowing you to use regular expressions to assign values to a bin based on powerful patterns.
- Scatter Mark formatting: We've ramped up the customization options available on Scatter charts, allowing you to control the size, border thickness, and opacity of each plotted mark.
- Line Dash Styles: A new option on Discrete and Continuous Line charts results in a different dash style for each series, making it easier to differentiate the series and improving the accessibility of your charts.

Here's one example of a chart that shows off the secondary Y axis, custom legend position, and line dash styles. And here's another, showing the effect you can get by customizing your scatter chart's mark style.

We hope you enjoy these new charting capabilities, but we're not done yet! Over the next couple of months, we'll be moving our focus to Table charts, adding options like conditional formatting, text wrapping, and column pinning. If you have any other ideas for new customization features, please let us know using the MongoDB Feedback Engine. If you haven't tried Charts yet, you can get started for free by signing up for MongoDB Atlas and deploying a free tier cluster.

November 18, 2020
Updates

Client-Side Field Level Encryption is now on Azure and Google Cloud

We're excited to announce expanded key management support for Client-Side Field Level Encryption (FLE). Initially released last year with Amazon's Key Management Service (KMS), native support for Azure Key Vault and Google Cloud KMS is now available in beta with support for our C#/.NET, Java, and Python drivers. More drivers will be added in the coming months. Client-Side FLE provides among the strongest levels of data privacy available today. By expanding our native KMS support, it is even easier for organizations to further enhance the privacy and security of sensitive and regulated workloads with multi-cloud support across ~80 geographic regions.

My databases are already encrypted. What can I do with Client-Side Field Level Encryption?

What makes Client-Side FLE different from other database encryption approaches is that the process is totally separated from the database server. Encryption and decryption are instead handled exclusively within the MongoDB drivers in the client, before sensitive data leaves the application and hits the network. As a result, all encrypted fields sent to the MongoDB server, whether they are resident in memory, in system logs, at rest in storage, or in backups, are rendered as ciphertext. Neither the server nor any administrators managing the database or cloud infrastructure have access to the encryption keys. Unless an attacker has a compromised DBA password, privileged network access, AND a stolen client encryption key, the data remains protected, securing it against sophisticated exploits.

MongoDB's Client-Side FLE complements existing network and storage encryption to protect the most highly classified, sensitive fields of your records without:

- Developers needing to write additional, highly complex encryption logic application-side
- Compromising your ability to query encrypted data
- Significantly impacting database performance

By securing data with Client-Side FLE you can move to managed services in the cloud with greater confidence. This is because the database only works with encrypted fields, and you control the encryption keys, rather than having the database provider manage the keys for you. This additional layer of security enforces an even finer-grained separation of duties between those who use the database and those who administer and manage the database. You can also more easily comply with "right to erasure" mandates in modern privacy legislation such as the GDPR and the CCPA. When a user invokes their right to erasure, you simply destroy the associated field encryption key and the user's Personally Identifiable Information (PII) is rendered unreadable and irrecoverable to anyone.

Client-Side FLE Implementation

Client-Side FLE is highly flexible. You can selectively encrypt individual fields within a document, multiple fields within the document, or the entire document. Each field can be optionally secured with its own key and decrypted seamlessly on the client. To check out how Client-Side FLE works, take a look at this handy animation.

Client-Side FLE uses standard NIST FIPS-certified encryption primitives, including AES at the 256-bit security level in authenticated CBC mode: the AEAD AES-256-CBC encryption algorithm with an HMAC-SHA-512 MAC. Data encryption keys are protected by strong symmetric encryption with standard wrapping Key Encryption Keys, which can be natively integrated with external key management services backed by FIPS 140-2 validated Hardware Security Modules (HSMs). Initially that integration was with Amazon's KMS; it is now also available with Azure Key Vault and Google Cloud KMS in beta.
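To make the key management integration a little more concrete, here is a minimal sketch of the kmsProviders credentials document a driver can be configured with for the new integrations. Treat this as an illustrative assumption rather than a definitive API: field names can vary by driver and version, every value shown is a placeholder, and your driver's Client-Side FLE documentation is the authority for the exact shape.

```javascript
// Hedged sketch: credentials the driver uses to talk to each KMS.
// All values are placeholders; exact field names may vary by driver version.
const kmsProviders = {
  azure: {
    tenantId: "<azure-tenant-id>",
    clientId: "<service-principal-client-id>",
    clientSecret: "<service-principal-client-secret>"
  },
  gcp: {
    email: "<service-account-email>",
    privateKey: "<base64-encoded-service-account-private-key>"
  }
};

// When creating a data encryption key, a masterKey document tells the KMS
// which Key Encryption Key should wrap it (again, placeholder values).
const azureMasterKey = {
  keyVaultEndpoint: "my-key-vault.vault.azure.net",
  keyName: "my-key-encryption-key"
};
```

A data encryption key created this way is wrapped by the Key Encryption Key held in your KMS, never by MongoDB, which is what keeps the server and its operators outside the trust boundary.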
Alternatively, you can use remote secure web services to consume an external key, or a secrets manager such as HashiCorp Vault.

Getting Started

To learn more, download our Guide to Client-Side FLE. The guide provides an overview of how Client-Side FLE is implemented, use cases for it, and how it complements existing encryption mechanisms to protect your most sensitive data. Review the Client-Side FLE key management documentation for more details on how to configure your chosen KMS.

Safe Harbor

The development, release, and timing of any features or functionality described for our products remains at our sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision, nor is it a commitment, promise, or legal obligation to deliver any material, code, or functionality.

November 9, 2020
Updates

Introducing Multi-Cloud Clusters on MongoDB Atlas

One of the core pillars of MongoDB is the freedom to run anywhere. Since 2017, organizations have been able to use MongoDB Atlas, our fully managed global cloud database, across 70+ regions on the cloud provider of their choice: AWS, Azure, or Google Cloud. We're increasingly seeing customers run independent workloads on different clouds, a common practice among enterprises with different applications and business units. However, we believe the real power of multi-cloud applications is yet to be realized in our industry.

So today, I'm proud to announce that multi-cloud clusters are generally available on MongoDB Atlas! With this groundbreaking capability, customers can distribute their data in a single cluster across multiple public clouds simultaneously, or move workloads seamlessly between them. Data, traditionally the hardest piece of an application stack to move, is now the easiest.

A New Multi-Cloud Paradigm

More organizations are moving towards a multi-cloud model, and they want the freedom and flexibility to use the best of each cloud provider for any and every application. The question is how engineering teams can do this efficiently and deliberately while dealing with challenges such as incompatible operations and the effects of data gravity. Read our eBook, Why the World is Going Multi-Cloud, for a high-level guide to today's fast-emerging cloud architecture.

With multi-cloud clusters on MongoDB Atlas, customers can realize the benefits of a multi-cloud strategy with true data portability and a simplified management experience. Developers no longer have to deal with manual data replication, and businesses can focus their technical resources on building differentiated software. This opens up a whole new set of possibilities that were previously difficult, if not impossible, to achieve, from being able to use best-of-breed services across multiple platforms to data mobility and cross-cloud resiliency.

Use best-in-class technology across multiple clouds in parallel

Developer productivity is critical to a company's success, and CTOs know that enabling their teams to choose the best technology available is a major contributing factor. With MongoDB Atlas, developers get more freedom in deciding what building blocks to use, regardless of which cloud is storing application data. Some examples of popular cloud services that our customers like to use include AWS Lambda, Google Cloud AI Platform, and Azure Cognitive Services. With multi-cloud clusters, developers can now run operational and analytical workloads using different cloud tools on the same dataset, with no manual data replication required.

Migrate workloads across cloud environments seamlessly

Data mobility is another reason companies want a multi-cloud strategy. The world is constantly changing, and businesses never know if, or how, their cloud requirements are going to change. They may face mergers and acquisitions, be subject to new regulatory controls for data portability, find themselves in direct competition with a cloud provider, or find significant cost savings on another platform. With MongoDB Atlas, organizations can future-proof their applications and have the option to move them from one cloud to another if needed, without undergoing a costly data migration. Our built-in automation seamlessly handles cross-cloud data replication on a rolling basis so applications stay online and available to end users.
Improve high availability with cross-cloud redundancy

Any business with a mission-critical or user-facing application knows that downtime is unacceptable. Cloud disruptions vary in severity, from temporary capacity constraints to full-blown outages, and organizations need to mitigate as much risk as possible. By distributing data across multiple clouds, they can improve high availability and application resiliency without sacrificing latency. MongoDB Atlas extends the number of locations available by allowing users to choose from any of the nearly 80 regions available (with more coming) across AWS, Azure, and Google Cloud, the widest selection of any cloud database on the market.

This is particularly relevant for businesses that must comply with data sovereignty requirements but have limited deployment options due to sparse regional coverage on their primary cloud provider. In some cases, only one in-country region is available, leaving users especially vulnerable to disruptions in cloud service. For example, AWS and Google Cloud each offer only one region in Canada. With multi-cloud clusters, organizations can take advantage of both regions and add additional nodes in the Azure Toronto and Quebec City regions for extra fault tolerance. With MongoDB Atlas, customers no longer need to make a trade-off between availability and compliance.

Reach more users with flexible deployment options

In order to deliver a world-class application experience, organizations must at a minimum meet end-user requirements for their products and services. For SaaS providers and B2C businesses, this may include cloud provider preferences or regional availability. While each of the cloud providers offers a large and growing list of regions globally, their data centers are still heavily concentrated in the USA, Europe, and eastern Asia. If multinational enterprises want to reach local users in other areas, they may not always find coverage on a single cloud. For example, AWS is the only provider to offer a cloud region in Bahrain, Azure Oslo is the only option in Norway, and only Google Cloud has data centers in Indonesia. To capture more global market share, companies may need a multi-cloud strategy to meet customers where they are.

An Integrated, More Secure Cloud Data Platform

MongoDB has consistently delivered innovations in the data management experience, including automated data tiering with Atlas Online Archive, integrated full-text search with Atlas Search, and Client-Side Field Level Encryption (FLE) for some of the strongest levels of data privacy available today. Client-Side FLE currently works with AWS Key Management Service (KMS), and will soon offer beta support for Azure Key Vault and Google Cloud KMS. With this expansion, it will be easier for organizations to further enhance the privacy and security of sensitive and regulated workloads across all major public cloud platforms. Read our guide to learn more about how Client-Side Field Level Encryption protects data in MongoDB.

Multi-Cloud Data Management Made Easy

Multi-cloud distribution can be enabled for both new and existing clusters starting today via the Atlas UI. Multi-cloud clusters come with all the features that our customers know and love, including built-in security defaults, fully managed backups and restores, automated patches and upgrades, intelligent performance advice, and more.
While multi-cloud clusters are generally available, we plan to release more capabilities in the coming months to deliver even more value to you. Whether you're a startup just getting off the ground or a global enterprise in the midst of a multi-year cloud transformation initiative, our multi-cloud database solution abstracts away the toughest roadblock to unlocking your multi-cloud strategy. When your data can travel across clouds, there's no limit to what you can build. Let us know where multi-cloud clusters on MongoDB Atlas take you, or tell us what you need to get there.

October 20, 2020
Updates

1Data - PeerIslands Data Sync Accelerator

Today's enterprises are in the midst of digital transformation, but they're hampered by monolithic, on-prem legacy applications that don't have the speed, agility, and responsiveness required for digital applications. To make the transition, enterprises are migrating to the cloud. MongoDB has partnered with PeerIslands to develop 1Data, a reference architecture and solution accelerator that helps users with their cloud modernization. This post details the challenges enterprises face with legacy systems and walks through how working with 1Data helps organizations expedite cloud adoption.

Modernization Trends

As legacy systems become unwieldy, enterprises are breaking them down into microservices and adopting cloud-native application development. Monolith-to-microservices migration is complex, but provides value across multiple dimensions. These include:

- Development velocity
- Scalability
- Cost-of-change reduction
- Ability to build multiple microservice databases concurrently

One common approach for teams adopting and building out microservices is to use domain-driven design to break down the overall business domain into bounded contexts first. They also often use the Strangler Fig pattern to reduce the overall risk, migrate incrementally, and then decommission the monolith once all required functionality is migrated. While most teams find this approach works well for the application code, it's particularly challenging to break down monolithic databases into databases that meet the specific needs of each microservice. There are several factors to consider during the transition:

- Duration. How long will the transition to microservices take?
- Data synchronization. How much and what types of data need to be synchronized between monolith and microservice databases?
- Data translation in a heterogeneous schema environment. How are the same data elements processed and stored differently?
- Synchronization cadence. How much data needs syncing, and how often (real-time, nightly, etc.)?
- Data anti-corruption layer. How do you ensure the integrity of transaction data, and prevent the new data from corrupting the old?

Simplifying Migration to the Cloud

Created by PeerIslands and MongoDB, 1Data helps enterprises address the challenges detailed above. Migrate and synchronize your data with confidence with 1Data:

- Schema migration tool. Convert legacy DB schema and related components automatically to your target MongoDB instance. Use the GUI-based data mapper to track errors.
- Real-time data sync pipeline. Sync data between monolith and microservice databases nearly in real time with enterprise-grade components.
- Conditional data sync. Define how to slice the data you're planning to sync.
- Data cleansing. Translate data as it's moved.
- DSLs for data transformation. Apply domain-specific business rules to the MongoDB documents you want to create from your various aggregated source system tables. This layer also acts as an anti-corruption layer.
- Data auditing. Independently verify data sync between your source and target systems.
- Go beyond the database. Synchronize data from APIs, webhooks, and events.
- Bidirectional data sync. Replicate key microservice database updates back to the monolithic database as needed.

Get Started with Real-Time Data Synchronization

With the initial version of 1Data, PeerIslands addresses the core functionality of real-time data sync between source and target systems. Here's a view of the logical architecture:
- Source System. The source system can be a relational database like Oracle, where we'll rely on CDC, or other sources like events, APIs, or webhooks.
- Data Capture & Streaming. Captures the required data from the source system and converts it into data streams using either off-the-shelf DB connectors or custom connectors, depending on the source type. 1Data implements data sharding and throttling, which enable data synchronization at scale, in this phase.
- Data Transformation. The core of the accelerator, where we convert the source data streams into target MongoDB document schemas. We use a LISP-based domain-specific language to enable simple, rule-based data transformation, including user-defined rules.
- Data Sink & Streaming. Captures the data streams that need to be applied to the MongoDB database through stream consumers. The actual update of the target DB is done through sink connectors.
- Target System. The MongoDB database used by the microservices.
- Auditing. Most data that gets migrated is enterprise-critical; 1Data audits the entire data synchronization process for missed data and incorrect updates.
- Two-way sync. The logical architecture enables data synchronization from the MongoDB database back to the source database.

We used MongoDB, Confluent Kafka, and Debezium to implement this initial version of 1Data. The technical architecture is cloud-agnostic and can be deployed on-prem as well. We'll be customizing it for key cloud platforms and fleshing out specific architectures to adopt for common data sync scenarios.

Conclusion

The 1Data solution accelerator lends itself to multiple use cases, from single view to legacy modernization. Please reach out to us for technical details and implementation assistance, and watch this space as we develop the 1Data accelerator further.

October 15, 2020
Updates

Announcing Azure Private Link Integration for MongoDB Atlas

We're excited to announce the general availability of Azure Private Link as a new network access management option in MongoDB Atlas.

MongoDB Atlas is built to be secure by default. All dedicated Azure clusters on Atlas are deployed in their own VNET. For network security controls, you already have the options of an IP Access List and VNET Peering. The IP Access List in Atlas offers a straightforward and secure connection mechanism, and all traffic is encrypted with end-to-end TLS. But it requires that you provide static public IPs for your application servers to connect to Atlas, and that you list all such IPs in the Access List. If your applications don't have static public IPs, or if you have strict requirements on outbound database access via public IPs, this won't work for you.

The existing solution to this is VNET Peering, with which you configure a secure peering connection between your Atlas cluster's VNET and your own VNET(s). This is easy, but the connections are two-way. While Atlas never has to initiate connections to your environment, some customers perceive VNET peering as extending the network trust boundary anyway. Although Access Control Lists (ACLs) and security groups can control this access, they require additional configuration.

MongoDB Atlas and Azure Private Link

Now, you can use Azure Private Link to connect a VNET to MongoDB Atlas. This brings two major advantages:

- Unidirectional: connections via Private Link use a private IP within the customer's VNET and are unidirectional, such that the Atlas VNET cannot initiate connections back to the customer's VNET. Hence, there is no extension of the network trust boundary.
- Transitive: connections to the Private Link private IPs within the customer's VNET can come transitively from another VNET peered to the Private Link-enabled VNET, or from an on-prem data center connected with ExpressRoute to the Private Link-enabled VNET. This means that customers can connect directly from their on-prem data centers to Atlas without using public IP Access Lists.

Azure Private Link offers a one-way network connection service between an Azure VNET and a MongoDB Atlas VNET.

Meeting Security Requirements with Atlas on Azure

Azure Private Link adds to the security capabilities that are already available in MongoDB Atlas, like Client-Side Field Level Encryption, database auditing, BYO key encryption with Azure Key Vault integration, federated identity, and more. MongoDB Atlas undergoes independent verification of security and compliance controls, so you can be confident in using Atlas on Azure for your most critical workloads.

Ready to try it out? Get started with MongoDB Atlas today!

October 15, 2020
Updates
