MongoDB Atlas for Government
We are pleased to announce the general availability of MongoDB Atlas for Government, an independent environment of our flagship cloud product MongoDB Atlas that’s built for US government needs. It allows federal, state, and local governments as well as educational institutions to build and iterate faster using a modern database-as-a-service platform. The service is available in AWS GovCloud (US) and AWS US East/West regions.

We are also pleased to announce that MongoDB Atlas for Government has been approved as FedRAMP Ready. FedRAMP Ready indicates that a third-party assessment organization has vouched for a cloud service provider’s security capabilities, and that the FedRAMP PMO has reviewed and approved the Readiness Assessment Report.

MongoDB Atlas for Government Highlights:

- Atlas for Government clusters can be created in AWS GovCloud East/West or AWS East/West regions.
- Atlas for Government clusters can span regions within AWS GovCloud or within AWS (but not across those two environments).
- Core Atlas features such as automated backups, AWS PrivateLink, AWS KMS, federated authentication, Atlas Search, and more are fully supported.
- Applications can use client-side field level encryption with AWS KMS in GovCloud or AWS East/West.

Getting Started and Pricing:

MongoDB Atlas for Government is available to government customers and to companies that sell to the US government. You can buy Atlas for Government through the AWS GovCloud or AWS marketplace. Of course, you can also work directly with MongoDB; please fill out this form and a representative will get in touch with you.

To learn more about Atlas for Government, visit the product page, check out the documentation, or read the FedRAMP FAQ.
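As a rough illustration of the client-side field level encryption option mentioned above, this is the general shape of the KMS-provider and schema-map configuration a MongoDB driver expects. This is a sketch only: the credentials, the `records.inspections` namespace, and the data key ID are placeholders, and you should follow the driver documentation for your language.

```python
# Sketch of the configuration shape for client-side field level encryption
# with AWS KMS. All values below are placeholders, not real credentials.
kms_providers = {
    "aws": {
        "accessKeyId": "<AWS_ACCESS_KEY_ID>",          # IAM principal with KMS access
        "secretAccessKey": "<AWS_SECRET_ACCESS_KEY>",
    }
}

# A schema map tells the driver which fields to encrypt automatically.
schema_map = {
    "records.inspections": {  # hypothetical database.collection namespace
        "bsonType": "object",
        "properties": {
            "ssn": {
                "encrypt": {
                    "bsonType": "string",
                    # Deterministic encryption still allows equality queries.
                    "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
                    "keyId": ["<DATA_KEY_UUID>"],  # data key from the key vault
                }
            }
        },
    }
}

print(sorted(kms_providers["aws"]))  # the two fields the AWS provider requires
```

In a real application, these two structures are passed to the driver’s auto-encryption options when the client is constructed; the driver then encrypts the listed fields before they ever leave the application.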
Flowhub Relies on MongoDB to Meet Changing Regulations and Scale Its Business
Showingly Transforms Real Estate with MongoDB Atlas and MongoDB Realm
MongoDB Atlas Arrives in Italy | MongoDB Atlas Arriva in Italia
We’re delighted to announce our first foray into Italy with the launch of MongoDB Atlas on the AWS Europe (Milan) region. MongoDB Atlas is now available in 20 AWS regions around the world, including 6 European regions.

Milan is a Recommended Region, meaning it has three Availability Zones (AZs). When you deploy a cluster in Milan, Atlas automatically distributes replicas to the different AZs for higher availability: if there’s an outage in one zone, the Atlas cluster will automatically fail over to keep running in the other two. You can also deploy multi-region clusters with the same automatic failover built in.

We’re excited that, like customers in France, Germany, the UK, and more, Italian organizations will now be able to keep data in-country, delivering low-latency performance and ensuring confidence in data locality. We’re confident our Italian customers in government, financial services, and utilities in particular will appreciate this capability as they build tools to improve citizens’ lives and better serve their local users.

Explore Atlas on AWS Today

In Italian, courtesy of Dominic:

Siamo lieti di annunciare la nostra espansione in Italia rendendo disponibile MongoDB Atlas nella regione AWS Europa (Milano). MongoDB Atlas è ora disponibile in 20 regioni AWS nel mondo, comprese 6 regioni europee.

Milano è una Recommended Region; questo significa che ha tre Availability Zones (AZ). Quando viene creato un cluster a Milano, Atlas distribuisce automaticamente le repliche sulle diverse AZ per aumentare la disponibilità e l’affidabilità: nel caso in cui avvenga un disservizio in una zona, il cluster Atlas utilizzerà la funzionalità di failover per restare in esecuzione sulle altre due. È anche possibile creare cluster multi-region che incorporano la stessa logica di failover automatico.
Siamo felici che anche le realtà italiane possano scegliere, come i nostri clienti in Francia, Germania, UK, ed altrove, di mantenere i propri dati all’interno dei confini nazionali, dando risposte a bassa latenza ai propri utenti ed assicurando loro la fiducia nella localizzazione fisica dei dati. Siamo sicuri che i nostri clienti in Italia, in particolare nel settore pubblico, nei servizi finanziari, e nelle utilities, apprezzeranno queste nuove possibilità per la creazione di nuovi strumenti per migliorare la vita dei cittadini e servire meglio i loro utenti in Italia.

Scopri subito Atlas disponibile su AWS
Announcing Azure Private Link Integration for MongoDB Atlas
We’re excited to announce the general availability of Azure Private Link as a new network access management option in MongoDB Atlas.

MongoDB Atlas is built to be secure by default. All dedicated Azure clusters on Atlas are deployed in their own VNET. For network security controls, you already have the options of an IP Access List and VNET Peering. The IP Access List in Atlas offers a straightforward and secure connection mechanism, and all traffic is encrypted with end-to-end TLS. But it requires that your application servers have static public IPs to connect to Atlas, and that you list all such IPs in the Access List. If your applications don’t have static public IPs, or if you have strict requirements on outbound database access via public IPs, this won’t work for you.

The existing solution to this is VNET Peering, with which you configure a secure peering connection between your Atlas cluster’s VNET and your own VNET(s). This is easy, but the connections are two-way. While Atlas never has to initiate connections to your environment, some customers perceive VNET Peering as extending the network trust boundary anyway. Although Access Control Lists (ACLs) and security groups can control this access, they require additional configuration.

MongoDB Atlas and Azure Private Link

Now, you can use Azure Private Link to connect a VNET to MongoDB Atlas. This brings two major advantages:

- Unidirectional: connections via Private Link use a private IP within the customer’s VNET, and are unidirectional such that the Atlas VNET cannot initiate connections back to the customer’s VNET. Hence, there is no extension of the network trust boundary.
- Transitive: connections to the Private Link private IPs within the customer’s VNET can come transitively from another VNET peered to the Private Link-enabled VNET, or from an on-prem data center connected with ExpressRoute to the Private Link-enabled VNET.
This means that customers can connect directly from their on-prem data centers to Atlas without using public IP Access Lists.

Azure Private Link offers a one-way network connection between an Azure VNET and a MongoDB Atlas VNET

Meeting Security Requirements with Atlas on Azure

Azure Private Link adds to the security capabilities that are already available in MongoDB Atlas, like Client-Side Field Level Encryption, database auditing, BYO key encryption with Azure Key Vault integration, federated identity, and more. MongoDB Atlas undergoes independent verification of security and compliance controls, so you can be confident in using Atlas on Azure for your most critical workloads.

Ready to try it out? Get started with MongoDB Atlas today!

Sign up now
Fraud Detection at FICO with MongoDB and Microservices
FICO is more than just the FICO credit score. Founded in 1956, FICO also offers analytics applications for customer acquisition, service, and security, plus tools for decision management. One of those applications is the Falcon Assurance Navigator (FAN), a fraud detection system that monitors purchasing and expenses through the full procurement-to-pay cycle.

Consider an expense report: the entities involved include the reporter, the approver, the vendor, the department or business unit, the expense line items, and more. A single report has multiple line items, where each line may be broken into different expense codes, different budget sources, and so on. This translates into a complicated data model that can be nested 6 or 7 layers deep: a great match for MongoDB’s document model, but quite hard to represent in the tabular model of relational databases.

FAN Architecture Overview

The fraud detection engine consists of a series of microservices that operate on transactions in queues that are persisted in MongoDB:

- Each transaction arrives in a receiver service, which places it into a queue.
- An attachment processor service checks for an attachment; if one exists, it sends it to an OCR service and stores the transaction enriched with the OCR data.
- A context creator service analyzes the transaction and associates it with any related past transactions.
- A decision execution engine runs the rules that have been set up by the client and identifies violations.
- One or more analytics engines review transactions and flag outliers.
- Now decorated with a score, the transaction goes to a case manager service, which decides whether to create a case for human follow-up based on any identified issues.
- At the same time, a notification manager passes updates on the processing of each transaction back to the client’s expense/procurement system.

To learn more, watch FICO’s presentation at MongoDB World 2018.
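The queue-based flow described above can be sketched in a few lines of Python. This is a toy, in-memory approximation: the service names mirror the description, but the rules, scoring, and OCR stand-ins are invented for illustration, and in the real FAN system the queues are persisted in MongoDB rather than held in memory.

```python
from collections import deque

def receiver(tx):
    tx["status"] = "received"
    return tx

def attachment_processor(tx):
    # Stand-in for sending the attachment to an external OCR service.
    if tx.get("attachment"):
        tx["ocr_text"] = f"ocr({tx['attachment']})"
    return tx

def context_creator(tx, history):
    # Associate the transaction with related past transactions.
    tx["related"] = [h["id"] for h in history if h["vendor"] == tx["vendor"]]
    return tx

def decision_engine(tx):
    # A single illustrative client rule: flag large amounts.
    tx["violations"] = ["over_limit"] if tx["amount"] > 1000 else []
    return tx

def analytics_engine(tx):
    tx["score"] = min(100, tx["amount"] / 20)  # toy outlier score
    return tx

def case_manager(tx):
    # Open a case for human follow-up if any issue was identified.
    tx["case_opened"] = bool(tx["violations"]) or tx["score"] > 90
    return tx

def process(tx, history):
    queue = deque([tx])  # in FAN, these queues live in MongoDB
    for service in (receiver, attachment_processor,
                    lambda t: context_creator(t, history),
                    decision_engine, analytics_engine, case_manager):
        queue.append(service(queue.popleft()))
    return queue.popleft()

history = [{"id": 1, "vendor": "acme"}]
result = process({"id": 2, "vendor": "acme", "amount": 1500,
                  "attachment": "receipt.pdf"}, history)
print(result["violations"], result["case_opened"], result["related"])
```

Because each stage only reads a transaction from a queue and writes an enriched transaction back, the services can be scaled and deployed independently, which is the point of the microservice design.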
Leaf in the Wild: SilkRoute Chooses MongoDB Over SQL Server for Critical Quality Assurance Platform
Leaf in the Wild posts highlight real-world MongoDB deployments. Read other stories about how companies are using MongoDB for their mission-critical projects.

MongoDB chosen for development productivity, operational efficiency with Cloud Manager, and “truly outstanding” professional services.

From manufacturing to retail, every part of the supply chain is starting to see the value of data. Whether it’s developing IoT quality assurance applications in manufacturing to ensure your products are defect-free or building data-driven customer loyalty programs so that brands can connect with and reward their fans, the top companies are working to improve their approach to data. SilkRoute Global is a software-as-a-service company focused on this industry. Its analytics products automate processes and present consumable, useful information to its customers. To understand the benefits they get from MongoDB, I spoke with Devin Duden, CTO of OmniSky (a division of SilkRoute) & Senior Software Engineer, and Amjad Hussain, CEO & Chief Data Scientist.

Tell us a little bit about SilkRoute.

SilkRoute is a passionate team of designers, machine learning scientists, and software engineers with tremendous industry knowledge of manufacturing, distribution, and retail. We live for solving big problems. Our industry-specific predictive and prescriptive analytics platform creates immense operational and strategic value for our customers. Our customer footprint is global and growing. Applied machine learning, business process automation, and mobility are woven into the fabric of everything we build. We offer a unique risk-free rapid implementation and integration approach for our customers to enjoy our solutions.

Please describe how you’re using MongoDB.

The application SilkRoute is building is a mobile application performing RFID inspections on industrial manufactured products.
The application provides a centralized data store of customers’ products and the inspections associated with a product, and allows those customers to easily share the inspection records with others. MongoDB was chosen for this application based on:

- Simplified schema design
- Increased flexibility for modeling complex relationships (e.g., using MongoDB eliminated recursive relationships necessary in a SQL-based solution)
- Easier capture of user-generated data
- Reduced development timeline
- Durability, scalability, and disaster recovery

SilkRoute Enterprise mobile RFID inspection architecture

What were you using before MongoDB? Was this a new project or did you migrate from a different database?

The current version of the application is a client-server implementation using SQL Server as a cloud sharing data store and Windows CE on the mobile device. The application is a rewrite.

How did you hear about MongoDB? Did you consider other alternatives, like relational or NoSQL databases?

I was introduced to MongoDB three years ago when I started working at SilkRoute. We were working on a social network at the time, which was using MongoDB as its primary data store. The RFID mobile application’s technical requirements were originally to use MS SQL Server. This technical requirement was provided by the client. During our Joint Application Design session with the client, we suggested using MongoDB, but didn’t make headway. When we attended MongoDB World 2015, we gathered enough details about MongoDB’s capabilities, along with real-world examples of high-volume, transaction-based applications being developed on MongoDB, that we were able to persuade the client to switch from SQL Server to MongoDB.

Please describe your MongoDB deployment, technology stack, and the version of MongoDB that you are running.

The MongoDB deployment is a 5-node replica set using Cloud Manager for operational management and deployment.
The replica set is deployed in the US East AWS region across all availability zones. At this point, we have not implemented sharding. The MongoDB replica set has been deployed in AWS following MongoDB’s best practices, using Amazon Linux AMIs. Each production node will run on an EC2 instance with 16 GB of memory and 4 CPU cores, with three 100 GB provisioned-IOPS EBS volumes, each formatted with XFS. One volume is mapped for data, one for the log, and one for the journal. The API stack is written in .NET 5 using the C# MVC/Web API framework. We are using version 2.0 of the MongoDB .NET driver.

Are you using any tools to monitor, manage, and back up your MongoDB deployment? If so, what? Do you use Ops Manager / Cloud Manager?

The replica set has been deployed and is managed using Cloud Manager. Cloud Manager simplified and streamlined replica set deployment and operations. This solution is the first time most team members have used MongoDB, so to reduce time spent on replica set deployment and configuration, Cloud Manager was a great fit. Following Cloud Manager’s directions to create AWS EC2 instances made it very easy for us to create images and build or tear down replica sets quickly. Streamlining manual tasks allowed the team to spend more time on development rather than on deploying a fully managed MongoDB replica set. In addition to Cloud Manager, the team just started using MongoDB Compass to analyze collections and document sizes.

Are you integrating MongoDB with other data analytics, BI, or visualization tools? If so, can you share any details?

At this point we have not integrated any BI tools. One of our objectives is to connect with the client’s BI system using the MongoDB Connector for BI and/or extract data from a tagged node to hydrate a SQL-based BI system. We’re planning to perform a POC on the Connector for BI, now that it has been released.

How are you measuring the impact of MongoDB on your business?
SilkRoute measures MongoDB’s impact by many factors, including ease of deployment, a code-first approach, a more agile development model, reduced total cost of ownership, and reduced time to market. The ease of deployment reduces or eliminates maintenance windows when spinning up a replica set or upgrading database versions, which means higher uptime for customers and less productive time eaten up for developers. A code-first approach adds to the savings by eliminating daunting DDL script management and supports better agile development. These factors result in reduced total cost of ownership and faster time to market.

Do you use any commercial subscriptions or services to support your MongoDB deployment?

SilkRoute is a MongoDB OEM partner. For the RFID application we will be embedding MongoDB Enterprise Server 3.2 and managing the deployment with Cloud Manager. We allocated a budget for MongoDB’s professional services in the early stages of the project. The professional services were tailored to the team’s skill set and agenda. In two separate onsite sessions, we covered topics from deployment, management, and recovery using Cloud Manager, to schema modeling and scaling. The value gained working hands-on with a MongoDB consulting engineer was worth twice the investment. During one session, we encountered a disaster recovery situation in a non-production environment. Unexpected though the situation was, I personally gained the most from the experience of working through the issue with a MongoDB expert in a very collaborative fashion. The professionalism and knowledge of our MongoDB consulting engineer were truly outstanding.

Do you have plans to use MongoDB for other applications? If so, which ones?

Yes, both internal and client initiatives. These include BI, a Warehouse Manager SaaS solution, a customer loyalty/couponing app, and client SaaS solutions which we are not at liberty to disclose at this point.
We would prefer to use MongoDB for all application and system development projects. Our preference is based on ease of use, an emphasis on a code-first approach for projects going forward, and built-in scalability and durability.

Have you upgraded to MongoDB 3.2? What most excites you about this release?

We’ve been developing the solution using MongoDB 3.0.x. We are actively migrating the database to version 3.2.1, and the production deployment will use 3.2.1. The most exciting features of MongoDB 3.2 for us are the BI Connector, document validation, $lookup, and WiredTiger’s in-memory option. We feel the biggest value adds for our clients are the BI Connector and the in-memory storage engine. The BI Connector will allow our clients’ BI environments to integrate directly with the solution we are building, eliminating the need to write ETL processes from MongoDB to a BI environment. The in-memory storage engine will increase read performance, which will reduce latency for API requests. Anything that increases overall performance is a plus.

What advice would you give someone who is considering using MongoDB for their next project?

I would highly recommend allocating a budget for MongoDB’s professional services to help with operations, deployment, and schema modeling. The value gained from their best-practices approach really reduces learning curves and POC time. Coming from a SQL world, prepare ERDs and break the ERDs into schema designs. This approach will help bridge team members from a relational to a non-relational data store. Take a top-down development approach, as it will uncover access patterns that may help with schema modeling.

Thank you for sharing your MongoDB experiences with us!

If you’re comparing MongoDB with relational databases, read our RDBMS to MongoDB Migration Guide to learn more.

Read the RDBMS to MongoDB Migration Guide

About the Author - Eric Holzhauer

Eric is a Product Marketing Manager at MongoDB.
How We Brought the MongoDB University App to Life
Recently we announced the availability of a new mobile app for MongoDB University, available for free in the App Store for iPhone and iPad. I sat down with Shannon Bradshaw, the Director of Education at MongoDB, and Sacha Servan-Schreiber, the engineer who designed and built the application, to get some more details.
Part 2: Your App is Taking Off, Now What? It’s Time to Scale Out MongoDB.
In our first post on scaling, we discussed the fundamentals of designing a performant and scalable application on MongoDB. Once you’re confident that your application is healthy and ready to grow, it’s time to think about scaling. Before jumping in, make sure you consider the different ways to scale, and beware of the potential pitfalls:

1. Understand Why You’re Scaling, and What Issues Are Down the Road

There are a lot of ways in which an application could be experiencing growth, or constraints to growth! Your workload could be predominantly reads or predominantly writes. Maybe your access operations are under control, but your data volume isn’t. As you grow, you could hit bottlenecks caused by bad schema decisions, inefficient indexing, insufficient RAM, disk speed, network latency, poor planning of transactional vs. analytical queries, or any of dozens of other factors. All of these root causes have different potential solutions. Choosing the right strategy requires understanding your dataset, your users, how you expect to grow, and more.

2. Understand the Trade-Offs: Horizontal vs. Vertical Scaling

While MongoDB makes it easy to scale out horizontally with built-in automatic sharding, sharding (or adding more shards) isn’t always the only answer. In some cases, improving your hardware can remove the constraints you’re encountering. For example, if your dataset grows and your working set no longer fits in RAM, you might invest in more RAM before deciding to scale out to more machines. Similarly, in some instances it might make sense to add more or faster disks, or to upgrade to SSDs.

3. Choose Your Shard Key Wisely

MongoDB supports multiple shard key policies to match your needs. Your shard key selection will impact the performance of your cluster. It’s critical that you pick the right key based on your application requirements and expected usage patterns.
You want to ensure both even distribution of writes and query isolation, i.e., that queries are targeted to a single shard as much as possible, rather than broadcast (scatter/gather) to all shards. By thinking about these issues before choosing a shard key, you can ensure scalable growth and avoid common sharding pitfalls.

So how do I make these decisions about scaling?

Take advantage of the many resources we provide. You can start by talking with an expert about scaling strategies for free. When you’re ready to take the next step, our Deployment Topology consulting package is a great way to evaluate your scaling options. And you can always check out our documentation and white papers for more tips.
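To see why shard key choice matters for write distribution, here is a toy simulation. It is pure Python, not MongoDB’s actual hashing or chunk-splitting logic; the boundaries and shard count are invented for illustration. It contrasts a hashed shard key with a monotonically increasing, range-partitioned key (such as a timestamp or ObjectId):

```python
import hashlib
from collections import Counter

NUM_SHARDS = 4

def hashed_shard(key):
    # Stand-in for a hashed shard key: even monotonically increasing
    # keys get spread roughly evenly across shards.
    return int(hashlib.md5(str(key).encode()).hexdigest(), 16) % NUM_SHARDS

def ranged_shard(key, boundaries=(2500, 5000, 7500)):
    # Stand-in for a range-based shard key with fixed chunk boundaries:
    # every key above the last boundary lands on the last shard.
    for shard, upper in enumerate(boundaries):
        if key < upper:
            return shard
    return NUM_SHARDS - 1

new_keys = range(8000, 10000)  # the next 2,000 inserts of a growing key
hashed = Counter(hashed_shard(k) for k in new_keys)
ranged = Counter(ranged_shard(k) for k in new_keys)

print("hashed:", dict(hashed))  # writes spread across all shards
print("ranged:", dict(ranged))  # every write hits the same shard
```

With the range-partitioned key, all 2,000 new writes hit one shard, a classic write hotspot, while the hashed key spreads them across the cluster. The trade-off is that hashed keys turn range queries on that field into scatter/gather operations, which is why the shard key must be chosen against your actual access patterns.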