Announcing MongoDB Relational Migrator
We’re thrilled to announce a new tool: MongoDB Relational Migrator. Relational Migrator simplifies the process of moving workloads from relational databases to MongoDB.

We’ve heard it from more customers than we can count: organizations want to replatform existing applications from relational databases to MongoDB. MongoDB is more intuitive, more flexible, and more scalable than relational databases. Customers tell us they need to move away from a relational backend in order to build new functionality into existing applications with greater agility, to make new and better use of enterprise data, or to scale existing services to volumes or usage patterns they were never designed to handle.

While some customers have successfully migrated relational workloads to MongoDB, many have struggled with how to approach the challenge. Requirements vary: Can we decommission the old database, or does it need to stay running? Is this a wholesale replatforming, or are we carving out pieces of functionality to move to MongoDB? Some customers end up using a variety of ETL, CDC, message queue, streaming, or pub/sub technologies to move data into MongoDB; others have decided it’s just too difficult.

It’s also important to think carefully about data modeling as part of a migration. Though it’s possible to naively move a relational schema into MongoDB without any changes, doing so won’t deliver many of MongoDB’s benefits. A better practice is to design a new, more denormalized MongoDB schema, and potentially to take the opportunity to revise the application’s architecture as well.

We want to make this process easier, which is why we’re developing MongoDB Relational Migrator. Relational Migrator streamlines the move from a relational database to MongoDB and is compatible with Oracle, Microsoft SQL Server, MySQL, and PostgreSQL.
Migrator connects to a relational database to analyze its existing schema, then helps architects design and map to a new MongoDB schema. When you’re ready, Migrator performs the data migration from the source RDBMS to MongoDB. The migration can be a one-shot move if you’re prepared for a hard cutover; soon, we will also support continuous sync for cases where the source system must keep running and pushing changes into MongoDB.

We know that moving long-running systems to MongoDB still isn’t as simple as pushing a button, which is why Relational Migrator is designed to be used with assistance from our Field Engineering teams. For example, as part of a consulting engagement with MongoDB, a consulting engineer can help you evaluate which applications are the best candidates for migration, design and implement a new MongoDB backend, and execute the migration. Relational Migrator significantly lowers the effort and risk in transforming and replicating your data, leaving more time to focus on other aspects of application modernization.

If you’ve been trying to figure out how to get off of a relational database, get in touch to learn more about MongoDB Relational Migrator.
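To make the data-movement step concrete, here is a minimal sketch of a one-shot migration in Python. It uses an in-memory SQLite database as a stand-in for the relational source and a plain list in place of a MongoDB collection; a real migration would write the documents with a driver such as pymongo, and the table and field names here are purely illustrative. This shows the kind of join-to-embed transformation Relational Migrator automates, not its actual implementation.

```python
# Sketch: turn normalized orders + line_items tables into denormalized
# MongoDB-style documents with the line items embedded in each order.
import sqlite3

def migrate_orders(conn):
    """Join each order with its line items and emit one embedded document per order."""
    docs = []
    for order_id, customer in conn.execute("SELECT id, customer FROM orders"):
        items = [
            {"sku": sku, "qty": qty}
            for sku, qty in conn.execute(
                "SELECT sku, qty FROM line_items WHERE order_id = ?", (order_id,)
            )
        ]
        # Denormalize: line items become an embedded array rather than a
        # separate table joined by foreign key.
        docs.append({"_id": order_id, "customer": customer, "items": items})
    return docs

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE line_items (order_id INTEGER, sku TEXT, qty INTEGER);
    INSERT INTO orders VALUES (1, 'Acme');
    INSERT INTO line_items VALUES (1, 'WIDGET-9', 3), (1, 'BOLT-2', 12);
""")
documents = migrate_orders(conn)
# With pymongo, the last step would be collection.insert_many(documents).
```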
Announcing GA of the MongoDB Atlas Operator for Kubernetes
We’re excited to announce the general availability of the Atlas Kubernetes Operator, the best way to use MongoDB with Kubernetes. The Atlas Kubernetes Operator makes it easy to deploy, manage, and access MongoDB Atlas from your preferred Kubernetes distribution. When the operator is installed into your Kubernetes environment, it exposes Kubernetes custom resources to fully manage projects, deployments (clusters and serverless instances), network access (IP Access Lists and Private Endpoints), database users, backup, and more. For a full list of capabilities, check out the Atlas Operator documentation.

The Atlas Operator is built to Kubernetes standards. It’s open source and built with the CNCF Operator Framework, so you can have confidence that it will work with your Kubernetes environment. The Operator supports any Certified Kubernetes Distribution and is OpenShift-certified.

With the Operator, you can easily manage your Atlas resources directly from Kubernetes, using the Kubernetes API. This means no switching between systems: you can manage your containerized applications and the data layer powering them from a single control plane. This also makes it easy to integrate Atlas into your Kubernetes-native CI/CD pipelines, automatically setting up and tearing down infrastructure as part of your deployment process.

Why Kubernetes and MongoDB Atlas?

Atlas is a multi-cloud document database that provides the versatility you need to build sophisticated and resilient applications. It has built-in high availability, is easily scalable, and is flexible enough to support rapid iteration and shipping of new application features. This makes it a great fit for the modern development and deployment practices that containerization and Kubernetes support. It’s also incredibly simple to deploy multi-cloud clusters or move between clouds on Atlas, a good match for the portability that containers provide.
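As an illustration of what driving Atlas through Kubernetes custom resources looks like, here is a sketch of a deployment manifest. The resource kinds and field names below are indicative only and vary by operator version, so treat this as a shape rather than an exact schema; the Atlas Operator documentation has the authoritative reference.

```yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasDeployment
metadata:
  name: my-atlas-deployment
spec:
  projectRef:
    name: my-atlas-project      # refers to an AtlasProject resource defined separately
  deploymentSpec:
    name: example-cluster
    providerSettings:
      providerName: AWS
      instanceSizeName: M10
      regionName: US_EAST_1
```

Applied with kubectl apply, the operator reconciles this resource by creating or updating the corresponding cluster in Atlas, so the manifest can live in the same Git repository and CI/CD pipeline as the application it backs.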
Learn more about the Atlas Operator for Kubernetes or get going right away with the Atlas Operator Quick Start.
Introducing Pay as You Go MongoDB Atlas on AWS Marketplace
We’re excited to introduce a new way of paying for MongoDB Atlas. AWS customers can now pay Atlas charges via our new AWS Marketplace listing. Through this listing, individual developers can enjoy a simplified payment experience via their AWS accounts, while enterprises now have another way to procure MongoDB in addition to privately negotiated offers, already supported via AWS Marketplace.

Previously, customers who wanted to pay via AWS Marketplace had to commit to a certain level of usage upfront. Pay as you go is available directly in Atlas via credit card, PayPal, and invoice, but it wasn’t available in AWS Marketplace until today. With this new listing and integration, you can pay via AWS with no upfront commitments. Simply subscribe via AWS Marketplace and start using Atlas.

You can get started for free with Atlas’s free-forever tier, then scale as needed. You’ll be charged in AWS only for the resources you use in Atlas, with no payment minimum. Deploy, scale, and tear down resources in Atlas as needed; you’ll pay just for the hours that you’re using them.

Atlas comes with a Basic Support Plan via in-app chat. If you want to upgrade to another Atlas support plan, you can do so in Atlas. Usage and support costs will be billed together to your AWS account daily. If you’re connecting Atlas to applications running in AWS, or integrating with other AWS services, you’ll be able to see all your costs in one place in your AWS account.

To get started with Atlas via AWS Marketplace, visit our Marketplace listing and subscribe using your account. You’ll then be prompted to either sign in to your existing Atlas account or sign up for a new Atlas account. Try MongoDB Atlas for free today!
Announcing Google Private Service Connect (PSC) Integration for MongoDB Atlas
We’re excited to announce the general availability of Google Cloud Private Service Connect (PSC) as a new network access management option in MongoDB Atlas. Announced alongside the availability of MongoDB 5.1, Google Cloud PSC is GA for use with Atlas. See the documentation for instructions on setting up Google Cloud PSC for Atlas, or read on for more information.

MongoDB Atlas is secure by default. All dedicated Google Cloud clusters on Atlas are deployed in their own VPC. To set up network security controls, Atlas customers already have the options of an IP Access List and VPC Peering. The IP Access List in Atlas is a straightforward and secure connection mechanism, and all traffic is encrypted with end-to-end TLS. But you must be able to provide static public IPs for your application servers to connect to Atlas, and to list those IPs in the Access List. If your applications don’t have static public IPs, or if you have strict requirements on outbound database access via public IPs, this won’t work for you.

The existing solution to this is VPC Peering, which allows you to configure a secure peering connection between your Atlas cluster’s VPC and your own Google Cloud VPC(s). This is easy, but the connections are two-way. Atlas never has to initiate connections to your environment, but some Atlas users don’t want to use VPC peering because it extends the perceived network trust boundary. Access Control Lists (ACLs) and IAM Groups can control this access, but they require additional configuration.

MongoDB Atlas and Google Cloud PSC

Now, you can use Google Cloud Private Service Connect to connect a VPC to MongoDB Atlas. Private Service Connect allows you to create private and secure connections from your Google Cloud networks to MongoDB Atlas. It creates service endpoints in your VPCs that provide private connectivity and policy enforcement, allowing you to easily control network security in one place.
This brings two major advantages:

Unidirectional: connections via PSC use a private IP within the customer’s VPC and are unidirectional. Atlas cannot initiate connections back to the customer’s VPC, so there is no extension of the perceived network trust boundary.

Transitive: connections to the PSC private IPs within the customer’s VPC can come transitively from an on-prem data center connected to the PSC-enabled VPC with Cloud VPN. Customers can connect directly from their on-prem data centers to Atlas without using public IP Access Lists.

Google Cloud Private Service Connect offers a one-way network peering service between a Google Cloud VPC and a MongoDB Atlas VPC

Meeting security requirements with Atlas on Google Cloud

Google Cloud PSC adds to the security capabilities that are already available in MongoDB Atlas, like Client-Side Field-Level Encryption, database auditing, BYO key encryption with Google Cloud KMS integration, federated identity, and more. MongoDB Atlas undergoes independent verification of security and compliance controls, so you can be confident in using Atlas on Google Cloud for your most critical workloads.

To learn more about configuring Google PSC with MongoDB Atlas, visit our docs. If you’re already managing your Atlas clusters with our API, you can add a private endpoint with the documentation here. For more information about Google Cloud Private Service Connect, visit the Google Cloud docs or read the Introducing Private Service Connect release announcement. Try MongoDB Atlas for free today!
MongoDB Atlas for Government Achieves "FedRAMP In-process"
We are pleased to announce that MongoDB Atlas for Government has achieved the FedRAMP designation of “In-process”. This status reflects MongoDB’s continued progress toward a FedRAMP Authorized modern data platform for the US Government. Earlier this year, MongoDB Atlas for Government achieved the designation of FedRAMP Ready.

MongoDB is widely used across the Federal Government, including the Department of Veterans Affairs, the Department of Health & Human Services (HHS), the General Services Administration, and others. HHS is also sponsoring the FedRAMP authorization process for MongoDB.

What is MongoDB Atlas for Government?

MongoDB Atlas for Government is an independent environment of our flagship cloud product MongoDB Atlas. Atlas for Government has been built for US government needs. It allows federal, state, and local governments as well as educational institutions to build and iterate faster using a modern database-as-a-service platform. The service is available in AWS GovCloud (US) and AWS US East/West regions.

MongoDB Atlas for Government Highlights:

Atlas for Government clusters can be created in AWS GovCloud East/West or AWS East/West regions.
Atlas for Government clusters can span regions within AWS GovCloud or within AWS.
Atlas core features such as automated backups, AWS PrivateLink, AWS KMS, federated authentication, Atlas Search, and more are fully supported.
Applications can use client-side field level encryption with AWS KMS in GovCloud or AWS East/West.

Getting started and pricing

MongoDB Atlas for Government is available to Government customers or companies that sell to the US Government. You can buy Atlas for Government through AWS GovCloud or the AWS Marketplace. Please fill out this form and a representative will get in touch with you. To learn more about Atlas for Government, visit the product page, check out the documentation, or read the FedRAMP FAQ.
MongoDB Atlas for Government
We are pleased to announce the general availability of MongoDB Atlas for Government, an independent environment of our flagship cloud product MongoDB Atlas that’s built for US government needs. It allows federal, state, and local governments as well as educational institutions to build and iterate faster using a modern database-as-a-service platform. The service is available in AWS GovCloud (US) and AWS US East/West regions.

We are also pleased to announce that MongoDB Atlas for Government has been approved as FedRAMP Ready. FedRAMP Ready indicates that a third-party assessment organization has vouched for a cloud service provider’s security capabilities, and that the FedRAMP PMO has reviewed and approved the Readiness Assessment Report.

MongoDB Atlas for Government Highlights:

Atlas for Government clusters can be created in AWS GovCloud East/West or AWS East/West regions.
Atlas for Government clusters can span regions within AWS GovCloud or within AWS (but not across those two environments).
Atlas core features such as automated backups, AWS PrivateLink, AWS KMS, federated authentication, Atlas Search, and more are fully supported.
Applications can use client-side field level encryption with AWS KMS in GovCloud or AWS East/West.

Getting Started and Pricing:

MongoDB Atlas for Government is available to Government customers or companies that sell to the US Government. You can buy Atlas for Government through AWS GovCloud or the AWS Marketplace. Of course, you can also work directly with MongoDB; please fill out this form and a representative will get in touch with you. To learn more about Atlas for Government, visit the product page, check out the documentation, or read the FedRAMP FAQ.
Flowhub Relies on MongoDB to Meet Changing Regulations and Scale Its Business
Showingly Transforms Real Estate with MongoDB Atlas and MongoDB Realm
MongoDB Atlas Arrives in Italy | MongoDB Atlas Arriva in Italia
We’re delighted to announce our first foray into Italy with the launch of MongoDB Atlas on the AWS Europe (Milan) region. MongoDB Atlas is now available in 20 AWS regions around the world, including 6 European regions.

Milan is a Recommended Region, meaning it has three Availability Zones (AZs). When you deploy a cluster in Milan, Atlas automatically distributes replicas to the different AZs for higher availability: if there’s an outage in one zone, the Atlas cluster will automatically fail over to keep running in the other two. And you can also deploy multi-region clusters with the same automatic failover built in.

We’re excited that, like customers in France, Germany, the UK, and more, Italian organizations will now be able to keep data in-country, delivering low-latency performance and ensuring confidence in data locality. We’re confident our Italian customers in government, financial services, and utilities in particular will appreciate this capability as they build tools to improve citizens’ lives and better serve their local users.

Explore Atlas on AWS Today

Translated from the Italian version, courtesy of Dominic:

We are pleased to announce our expansion into Italy by making MongoDB Atlas available in the AWS Europe (Milan) region. MongoDB Atlas is now available in 20 AWS regions worldwide, including 6 European regions. Milan is a Recommended Region, which means it has three Availability Zones (AZs). When a cluster is created in Milan, Atlas automatically distributes replicas across the different AZs to increase availability and reliability: if an outage occurs in one zone, the Atlas cluster will use failover to keep running in the other two. It is also possible to create multi-region clusters that incorporate the same automatic failover logic.
We are happy that Italian organizations, like our customers in France, Germany, the UK, and elsewhere, can now choose to keep their data within national borders, providing low-latency responses to their users and assuring them confidence in the physical locality of their data. We are confident that our customers in Italy, particularly in the public sector, financial services, and utilities, will appreciate these new capabilities as they build tools to improve citizens’ lives and better serve their users in Italy.

Discover Atlas on AWS today
Announcing Azure Private Link Integration for MongoDB Atlas
We’re excited to announce the general availability of Azure Private Link as a new network access management option in MongoDB Atlas.

MongoDB Atlas is built to be secure by default. All dedicated Azure clusters on Atlas are deployed in their own VNET. For network security controls, you already have the options of an IP Access List and VNET Peering. The IP Access List in Atlas offers a straightforward and secure connection mechanism, and all traffic is encrypted with end-to-end TLS. But it requires that you provide static public IPs for your application servers to connect to Atlas, and that you list all such IPs in the Access List. If your applications don’t have static public IPs, or if you have strict requirements on outbound database access via public IPs, this won’t work for you.

The existing solution to this is VNET Peering, with which you configure a secure peering connection between your Atlas cluster’s VNET and your own VNET(s). This is easy, but the connections are two-way. While Atlas never has to initiate connections to your environment, some customers see VNET peering as extending the network trust boundary anyway. Although Access Control Lists (ACLs) and security groups can control this access, they require additional configuration.

MongoDB Atlas and Azure Private Link

Now, you can use Azure Private Link to connect a VNET to MongoDB Atlas. This brings two major advantages:

Unidirectional: connections via Private Link use a private IP within the customer’s VNET and are unidirectional; the Atlas VNET cannot initiate connections back to the customer’s VNET. Hence, there is no extension of the network trust boundary.

Transitive: connections to the Private Link private IPs within the customer’s VNET can come transitively from another VNET peered to the Private Link-enabled VNET, or from an on-prem data center connected with ExpressRoute to the Private Link-enabled VNET.
This means that customers can connect directly from their on-prem data centers to Atlas without using public IP Access Lists.

Azure Private Link offers a one-way network peering service between an Azure VNET and a MongoDB Atlas VNET

Meeting Security Requirements with Atlas on Azure

Azure Private Link adds to the security capabilities that are already available in MongoDB Atlas, like Client-Side Field-Level Encryption, database auditing, BYO key encryption with Azure Key Vault integration, federated identity, and more. MongoDB Atlas undergoes independent verification of security and compliance controls, so you can be confident in using Atlas on Azure for your most critical workloads.

Ready to try it out? Get started with MongoDB Atlas today! Sign up now
Fraud Detection at FICO with MongoDB and Microservices
FICO is more than just the FICO credit score. Founded in 1956, FICO also offers analytics applications for customer acquisition, service, and security, plus tools for decision management. One of those applications is the Falcon Assurance Navigator (FAN), a fraud detection system that monitors purchasing and expenses through the full procure-to-pay cycle.

Consider an expense report: the entities involved include the reporter, the approver, the vendor, the department or business unit, the expense line items, and more. A single report has multiple line items, where each line may be broken into different expense codes, different budget sources, and so on. This translates into a complicated data model that can be nested 6 or 7 layers deep: a great match for MongoDB’s document model, but quite hard to represent in the tabular model of relational databases.

FAN Architecture Overview

The fraud detection engine consists of a series of microservices ("Introduction to Microservices and MongoDB") that operate on transactions in queues that are persisted in MongoDB:

Each transaction arrives in a receiver service, which places it into a queue.
An attachment processor service checks for an attachment; if one exists, it sends it to an OCR service and stores the transaction enriched with the OCR data.
A context creator service analyzes the transaction and associates it with any related past transactions.
A decision execution engine runs the rules that have been set up by the client and identifies violations.
One or more analytics engines review transactions and flag outliers.
Now decorated with a score, the transaction goes to a case manager service, which decides whether to create a case for human follow-up based on any identified issues.
At the same time, a notification manager passes updates on the processing of each transaction back to the client’s expense/procurement system.

To learn more, watch FICO’s presentation at MongoDB World 2018.
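The queue-based flow described above can be sketched in a few lines of Python. The stage names follow the article, but the enrichment and rule logic is invented for illustration, and these in-memory queues stand in for the queues FAN persists in MongoDB (the context creator and analytics stages are omitted for brevity).

```python
# Sketch of a queue-based microservice pipeline: each stage reads a
# transaction from its inbound queue, enriches or scores it, and hands
# it to the next stage.
from queue import Queue

def receiver(txn, out_q):
    # Entry point: accept a transaction and enqueue it for processing.
    out_q.put(txn)

def attachment_processor(in_q, out_q):
    # Enrich the transaction with OCR output if it carries an attachment.
    txn = in_q.get()
    if "attachment" in txn:
        txn["ocr_text"] = f"OCR({txn['attachment']})"  # stand-in for the OCR service
    out_q.put(txn)

def decision_engine(in_q, out_q):
    # Run client-configured rules; here, a hypothetical "flag lines over 500" rule.
    txn = in_q.get()
    txn["violations"] = [li for li in txn["line_items"] if li["amount"] > 500]
    out_q.put(txn)

def case_manager(in_q):
    # Open a case for human follow-up only when violations were found.
    txn = in_q.get()
    return {"txn_id": txn["id"], "case_opened": bool(txn["violations"])}

q1, q2, q3 = Queue(), Queue(), Queue()
expense = {
    "id": 42,
    "attachment": "receipt.pdf",
    "line_items": [{"code": "TRAVEL", "amount": 900}, {"code": "MEALS", "amount": 40}],
}
receiver(expense, q1)
attachment_processor(q1, q2)
decision_engine(q2, q3)
result = case_manager(q3)
```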
Leaf in the Wild: SilkRoute Chooses MongoDB Over SQL Server for Critical Quality Assurance Platform
Leaf in the Wild posts highlight real world MongoDB deployments. Read other stories about how companies are using MongoDB for their mission-critical projects.

MongoDB chosen for development productivity, operational efficiency with Cloud Manager, and “truly outstanding” professional services

From manufacturing to retail, every part of the supply chain is starting to see the value of data. Whether it’s developing IoT quality assurance applications in manufacturing to ensure your products are defect-free or building data-driven customer loyalty programs so that brands can connect with and reward their fans, the top companies are working to improve their approach to data. SilkRoute Global is a software-as-a-service company focused on this industry. Its analytics products automate processes and present consumable, useful information to its customers. To understand the benefits they get from MongoDB, I spoke with Devin Duden, CTO of OmniSky (a division of SilkRoute) and Senior Software Engineer, and Amjad Hussain, CEO and Chief Data Scientist.

Tell us a little bit about SilkRoute.

SilkRoute is a passionate team of designers, machine learning scientists, and software engineers with tremendous industry knowledge of manufacturing, distribution, and retail. We live for solving big problems. Our industry-specific predictive and prescriptive analytics platform creates immense operational and strategic value for our customers. Our customer footprint is global and growing. Applied machine learning, business process automation, and mobility are woven into the fabric of everything we build. We offer a unique risk-free rapid implementation and integration approach for our customers to enjoy our solutions.

Please describe how you’re using MongoDB.

The application SilkRoute is building is a mobile application performing RFID inspections on industrial manufactured products.
The application provides a centralized data store of customers’ products and the inspections associated with a product, and allows those customers to easily share the inspection records with others. MongoDB was chosen for this application based on:

Simplified schema design
Increased flexibility for modeling complex relationships (e.g., using MongoDB eliminated recursive relationships necessary in a SQL-based solution)
Easier capture of user-generated data
Reduced development timeline
Durability, scalability, and disaster recovery

SilkRoute Enterprise mobile RFID inspection architecture

What were you using before MongoDB? Was this a new project or did you migrate from a different database?

The current version of the application is a client-server implementation using SQL Server as a cloud sharing data store and Windows CE on the mobile device. The application is a rewrite.

How did you hear about MongoDB? Did you consider other alternatives, like relational or NoSQL databases?

I was introduced to MongoDB three years ago when I started working at SilkRoute. We were working on a social network at the time, which was using MongoDB as its primary data store. The RFID mobile application’s technical requirements were originally to use MS SQL Server. This technical requirement was provided by the client. During our working Joint Application Design session with the client, we suggested using MongoDB, but didn’t make headway. When we attended MongoDB World 2015, we gathered enough details about MongoDB’s capabilities, along with real-world examples of high-volume, transaction-based applications being developed on MongoDB, that we were able to persuade the client to switch from SQL Server to MongoDB.

Please describe your MongoDB deployment, technology stack, and the version of MongoDB that you are running.

The MongoDB deployment is a 5-node replica set using Cloud Manager for operational management and deployment.
The replica set is deployed in the US East AWS region across all availability zones. At this point, we have not implemented sharding. The MongoDB replica set has been deployed in AWS following MongoDB’s best practices using Amazon Linux AMIs. Each production node will be running on EC2 instances with 16 GB of memory and 4-core CPUs, with three 100 GB provisioned-IOPS EBS volumes. Each volume is XFS-formatted. One volume is mapped for “data”, one volume is mapped for “log”, and one volume is mapped for “journal”. The API stack is written in .NET 5 using the C# MVC/Web API framework. We are using the MongoDB .NET driver version 2.0.

Are you using any tools to monitor, manage, and back up your MongoDB deployment? If so, what? Do you use Ops Manager / Cloud Manager?

The replica set has been deployed and managed using Cloud Manager. Cloud Manager simplified and streamlined replica set deployment and operations. This solution is the first time the majority of team members have used MongoDB. To reduce time spent on MongoDB replica set deployment and configuration, Cloud Manager was a great fit. Following Cloud Manager’s directions to create AWS EC2 instances made it very easy for us to create images and build or tear down replica sets quickly. Streamlining manual tasks allowed the team to focus more time on development than on deploying a fully managed MongoDB replica set. In addition to Cloud Manager, the team just started using MongoDB Compass to analyze collections and document sizes.

Are you integrating MongoDB with other data analytics, BI, or visualization tools? If so, can you share any details?

At this point we have not integrated any BI. One of our objectives is to connect with the client’s BI system using the MongoDB Connector for BI and/or extract data from a tagged node to hydrate a SQL-based BI system. We’re planning to perform a POC on the Connector for BI, now that it has been released.

How are you measuring the impact of MongoDB on your business?
SilkRoute measures MongoDB’s impact by many factors, including ease of use with deployments, a code-first approach, an improved agile development model, reduced total cost of ownership, and reduced time to market. The ease of deployments reduces or eliminates maintenance windows when spinning up a replica set or upgrading database versions, which means higher uptime for customers and less productive time eaten up for developers. A code-first approach adds to the savings by eliminating daunting DDL script management and aids better agile development. These factors result in reduced total cost of ownership and faster time to market.

Do you use any commercial subscriptions or services to support your MongoDB deployment?

SilkRoute is a MongoDB OEM partner. For the RFID application we will be embedding MongoDB Enterprise Server 3.2 and managing the deployment with Cloud Manager. We allocated a budget for MongoDB’s professional services in the early stages of the project. The professional services were tailored to the team’s skill set and agenda. With two separate onsite sessions, we covered topics from deployment, management, and recovery using Cloud Manager, to schema modeling and scaling. The value gained working hands-on directly with a MongoDB consulting engineer was worth twice the investment. During one session, we encountered a disaster recovery situation in a non-production environment. Unexpected though the situation was, I personally gained the most from the experience of working through the issue with a MongoDB expert in a very collaborative fashion. The professionalism and knowledge of our MongoDB consulting engineer were truly outstanding.

Do you have plans to use MongoDB for other applications? If so, which ones?

Yes, both internal initiatives and client initiatives. These include BI, a Warehouse Manager SaaS solution, a customer loyalty/couponing app, and client SaaS solutions, which we are not at liberty to disclose at this point.
We would prefer to use MongoDB for all application and system development projects. Our preference is based on ease of use, an emphasis on a code-first approach for projects going forward, and built-in scalability and durability.

Have you upgraded to MongoDB 3.2? What most excites you about this release?

We’ve been developing the solution using MongoDB 3.0.x. We are actively migrating the database to version 3.2.1, and the production deployment will use 3.2.1. The most exciting features of MongoDB 3.2 for us are the BI Connector, document validation, $lookup, and WiredTiger’s in-memory option. We feel the biggest value adds for our clients are the BI Connector and the in-memory storage engine. The BI Connector will allow our clients’ BI environments to integrate directly with the solution we are building, eliminating the need to write ETL processes from MongoDB to a BI environment. The in-memory storage engine will increase the performance of read operations, which will reduce the latency of API requests. Anything that increases overall performance is a plus.

What advice would you give someone who is considering using MongoDB for their next project?

I would highly recommend allocating a budget for MongoDB’s professional services to help with operations, deployment, and schema modeling. The value gained from their best-practices approach really reduces learning curves and POC time. Coming from a SQL world, prepare ERDs and break the ERDs into schema designs. This approach will help bridge team members from a relational to a non-relational data store. Take a top-down development approach, as it will uncover access patterns that may help with schema modeling.

Thank you for sharing your MongoDB experiences with us!

If you’re comparing MongoDB with relational databases, read our RDBMS to MongoDB Migration Guide to learn more. Read the RDBMS to MongoDB Migration Guide

About the Author - Eric Holzhauer

Eric is a Product Marketing Manager at MongoDB.