Andrew Davidson


Empower Modern App Developers with Document Databases

Across industries, business success depends on a company's ability to deliver new digital experiences through software. The speed at which a company can develop and deploy a new application with innovative features is a direct lever on business outcomes. Given the vital role developers play in the success of your business, it stands to reason that equipping them with the tools to maximize their productivity is in your best interest. Unfortunately, many organizations are unaware of the tax they're placing on their development teams by using a relational database. While the relational database has been a bedrock of data-driven applications for 50 years, it was developed in an era before the internet and is a poor fit as the foundation of today's web and mobile applications.

Document databases, which have emerged over the past decade, have cemented themselves as the most popular and widely used alternative to the tabular model found in traditional relational databases. Document databases have become so powerful that even relational databases are trying to emulate them. Built around JavaScript Object Notation (JSON)–like documents, document databases are intuitive for developers to use. Instead of the rigid row-and-column structure of the relational model, document databases map documents directly to objects in code, which is how developers naturally think of and work with data. Let's break down the key advantages of document databases in building modern applications. We'll see why the document model's flexibility eliminates the complex intergroup dependencies that have traditionally slowed developers down.

The limitations of the relational database model

Relational databases add complexity to a developer's workload, severely hampering the velocity of work. The rigid row-and-column structure creates a mismatch between the way developers think of code and data and the way they have to store it. Additionally, while the relational model was adequate in an age when most applications used a small, predefined set of attributes such as last names, ZIP codes, and state abbreviations, the majority of data collected by organizations today is rich in structure. We have given names and sometimes preferred names. We have unique attributes that are relevant to only some of us: people with PhDs have dissertation topics, sports lovers have favorite sports, and our families come in every conceivable shape and size. This richly structured data reflects how we actually think about the real world, and it is very difficult to flatten, store, analyze, or query using rows and columns.

With relational databases, developers can feel stuck in quicksand: changes to their applications require careful collaboration with experts like database administrators (DBAs), who help them translate their schemas and queries to the underlying relational data models and ensure that appropriate indexing strategies are employed. This layer of indirection increases cognitive load, is hard to reason about, and slows everything down. More than half of application changes require database schema modifications, and those database modifications take longer to complete than the application changes they are designed to support. You can quickly see why these complicated efforts severely slow the delivery of new software features into production.
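To make the mismatch concrete, here is a minimal sketch using Python's standard sqlite3 module; the person's attributes and the table names are illustrative, not from the original article. A single richly structured object must be decomposed across tables on the way in and reassembled with a join on the way out:

```python
import sqlite3

# One richly structured object, modeled the way a developer thinks about it.
person = {
    "given_name": "Grace",
    "preferred_name": "Grace",
    "dissertation_topic": "Compiler design",  # relevant only to PhDs
    "favorite_sports": ["tennis", "rowing"],  # variable-length list
}

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE people (
        id INTEGER PRIMARY KEY,
        given_name TEXT,
        preferred_name TEXT,
        dissertation_topic TEXT  -- NULL for everyone without a PhD
    );
    CREATE TABLE favorite_sports (
        person_id INTEGER REFERENCES people(id),
        sport TEXT
    );
""")

# Writing the object means decomposing it across tables...
db.execute(
    "INSERT INTO people VALUES (1, ?, ?, ?)",
    (person["given_name"], person["preferred_name"], person["dissertation_topic"]),
)
db.executemany(
    "INSERT INTO favorite_sports VALUES (1, ?)",
    [(sport,) for sport in person["favorite_sports"]],
)

# ...and reading it back means a join plus manual reassembly.
rows = db.execute("""
    SELECT p.given_name, s.sport
    FROM people p JOIN favorite_sports s ON s.person_id = p.id
""").fetchall()
print(rows)  # [('Grace', 'tennis'), ('Grace', 'rowing')]
```

Every new optional attribute means another column or table, and another schema migration.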
Enabling development of modern apps with document databases

With the birth of the internet and the proliferation of mobile and web apps, developers' roles evolved. The emergence of robust development frameworks, which abstracted away underlying complexity, and the rise of DevOps led organizations to consolidate developer functions. The new generation of full-stack developers wanted databases that better addressed their applications' requirements and their ways of working with data.

The founders of MongoDB recognized the need for a modern database solution while at the adtech giant DoubleClick in 2007, where, constrained by the relational model, they were unable to scale to the 400,000 transactions per second the business required. These challenges inspired them to create a new, modern, general-purpose database that could address the shortcomings of the relational data model and offer a solution that developers actually wanted to use. The result was a horizontally scalable, document-based NoSQL database called MongoDB.

The document database model in general, and MongoDB more specifically, addresses the limitations of relational databases in several notable ways:

- Intuitive data model: The documents at the center of document databases have a universal data format. JSON is a lightweight, language-independent, and human-readable format that has become a widely used standard for storing and exchanging data. These documents map directly to data structures in popular programming languages, so there is no need for the additional mapping layer often used with relational databases. Because data that is accessed together is stored together, there is less code to write; developers don't need to decompose data across tables or run joins.

- Flexible schema: These JSON-like documents are flexible. Each document can have its own fields, and there's no need to pre-define the schema in the database; it can be modified at any time. That flexibility enhances developer agility. (A minimal sketch of what this looks like in practice appears at the end of this article.)

Meeting user expectations while simplifying application architectures

The most innovative applications we use in our daily lives — think Netflix and Instagram — have raised user expectations for what every application should be. Today we expect applications to be:

- Highly responsive
- Able to deliver relevant information
- Optimized for mobile devices
- Secure
- Powered by real-time insights
- Continuously improved

Meeting those expectations can be extremely challenging, especially for developers using relational databases. A typical data infrastructure built around a legacy relational database can trap your development team in overly complex and siloed architectures. Document databases, on the other hand, can simplify application architectures. Documents are a superset of all other data models, so developers can store and work with a variety of data types, and development teams can accommodate most of their use cases in a single data model and database.

The document data model can help your developers overcome the limitations of the relational model while improving their productivity and velocity. It allows them to minimize the undifferentiated work of maintaining their infrastructure and to focus on meeting demanding user expectations. As a result, they can deliver better, more innovative applications faster than before.

Click here to read the original article published on The New Stack.
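Here is the sketch promised above: a minimal illustration of the intuitive data model and flexible schema, assuming Python with the PyMongo driver and a locally running MongoDB instance; the collection and field names are illustrative.

```python
from pymongo import MongoClient

# Connect to a local MongoDB instance (swap in your own deployment's URI).
client = MongoClient("mongodb://localhost:27017")
people = client.example_db.people

# Two documents in the same collection, each with its own shape.
# No ALTER TABLE, no migration: the "dissertation_topic" and
# "favorite_sports" fields simply exist where they are relevant.
people.insert_one({
    "name": {"given": "Ada", "preferred": "Ada"},
    "zip_code": "10001",
})
people.insert_one({
    "name": {"given": "Grace"},
    "dissertation_topic": "Compiler design",
    "favorite_sports": ["tennis", "rowing"],
})

# Each document comes back as a native dictionary: the stored shape maps
# directly to the object the code works with, with no join or reassembly.
print(people.find_one({"favorite_sports": "tennis"}))
```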

June 5, 2023

Shared Responsibility: More Agility, Less Risk

The tension between agility, security, and operational uptime can keep IT organizations from innovating as fast as they'd like. On one side, application developers want to move fast and continually deliver innovative new releases. On the other side, InfoSec and IT operations teams aim to continually reduce risk, which developers often perceive as slowing the process down. That perception couldn't be further from the truth. Modern InfoSec and IT operations are evolving into SecOps and DevOps, and the idea that they want to stop developers from innovating by restricting them to old, centrally controlled paradigms is a long-held prejudice that needs to be put to rest. What security and site reliability teams really want is for developers to operate with agility as well as safety, so that risks are appropriately governed. The shared responsibility model can reduce risk while still allowing for innovation. The way to enable developers to move fast while providing the level of security that SecOps and DevOps require is to abstract granular controls away from developers so they can focus on building applications while, in the background, secure defaults that cannot be disabled are in place at every level.

Doers get more done

Working with a cloud provider, whether an infrastructure-as-a-service (IaaS) vendor or a hyperscaler, is like going into a home improvement store and seeing all the tools and materials. It gives you a sense of empowerment. That's the same feeling you get in front of an administrative console for AWS, Google Cloud, or Azure. The aisles at home improvement stores, however, can contain some pretty raw materials. Imagine asking a team of developers to build a new, state-of-the-art kitchen out of lumber, pipes, and fittings without even a blueprint. You're going to wind up with pipes that leak, drawers that don't close, and cabinets that don't fit. This approach understandably worries InfoSec and IT operations teams, and it can cause them to be perceived as innovation blockers because they don't want developers attempting do-it-yourself security. So how do you find a place where the raw materials provide exactly what you need so that you can build with confidence? That's the best of both worlds: developers move faster by not having to deal with the plumbing, and InfoSec and IT operations get the security and reliability assurances they need. This is where the shared responsibility model comes in.

Shared responsibility in the cloud

When considering cloud security and resilience, some responsibilities fall clearly on the business, others fall on public cloud providers, and still others fall on the vendors of the cloud services being used. This is known as the shared responsibility model. Security and resilience in the cloud are only possible when everyone is clear on their roles and responsibilities. Shared responsibility recognizes that cloud vendors, such as MongoDB, must ensure the security and availability of their services and infrastructure, and that customers must also take appropriate steps to protect the data they keep in the cloud. The security defaults in MongoDB Atlas enable developers to be agile while also reducing risk. Atlas gives developers the necessary building blocks to move fast without having to worry about the minutiae of administrative security tasks.
Atlas enforces strict security policies for functions like authentication and network isolation, and it provides tools for following security best practices in areas such as encryption, database access, auto-scaling, and granular auditing. (A minimal sketch of what these defaults look like from the application side appears at the end of this article.)

Testing for resilience

The shared responsibility model attempts to strike a balance between agility, security, and resilience. Cloud vendors must meet the responsibilities of their service-level agreements (SLAs), but businesses also have to be conscientious about their cloud resources. Real-world scenarios can cause businesses to experience outages, and avoiding them is the essence of the shared responsibility model. To avoid such outages, MongoDB Atlas does everything possible to keep database clusters continuously available, while the customer holds the responsibility of provisioning appropriately sized clusters for their workloads. That can be an uphill battle when you're talking about an intensive workload running on an undersized cluster.

Consider a typical laptop as an example. It has an SLA insofar as it has specifications that determine what it can do. If you try to drive a workload that exceeds the laptop's specifications, it will freeze. Was the laptop to blame, or was it the workload? With the cloud, there's an even greater expectation that there are more than enough resources to handle any given workload. But those resources are based on real infrastructure with specs, just like the laptop. This example illustrates both the essence and the ambiguity of the shared responsibility model. As the customer, you're supposed to know whether a given stream of data is something your compute resources can handle. The challenge is that you don't know until you start running into the boundaries of your resources, and pushing the limits of those boundaries means risking the availability of those resources. It's not hard to imagine a developer, who may be working under considerable stress, driving a workload that exceeds the cluster's resources and leads to a freeze or outage.

It's essential, therefore, for companies to have a test environment that closely mimics their production environment. This allows them to validate that the MongoDB Atlas cluster can keep up with what they're throwing at it. Anytime companies make changes to their applications, there is risk. Some of that risk may be mitigated by capabilities like auto-scaling and elasticity, but the level of protection they afford is limited. Having a test environment can help companies better predict the outcome of the changes they make.

The cloud has evolved to a point where security, resilience, and agility can peacefully coexist. MongoDB Atlas comes with strict security policies right out of the box. It offers automated infrastructure provisioning, default security features, database setup, maintenance, and version upgrades so that developers can shift their focus from administrative tasks to innovation when building applications. By abstracting away some of the security and resilience responsibilities through the shared responsibility model, MongoDB Atlas allows developers to move fast while giving SecOps the reassurances they need to support their efforts.
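As promised above, here is a minimal sketch of the secure-by-default experience from the driver's side, assuming Python with PyMongo; the cluster URI and credentials are hypothetical placeholders, not values from the article.

```python
from pymongo import MongoClient

# The mongodb+srv scheme used by Atlas connection strings enables TLS by
# default, and Atlas does not permit unauthenticated access, so credentials
# are always required. (Hypothetical placeholder URI.)
client = MongoClient(
    "mongodb+srv://app_user:app_password@cluster0.example.mongodb.net/"
    "?retryWrites=true&w=majority"
)

# Network isolation is also on by default: this ping succeeds only if the
# client host is on the project's IP access list or a peered network.
client.admin.command("ping")
print("connected with TLS, authentication, and network controls enforced")
```

Developers never have to decide whether to turn these controls on; they simply cannot be turned off.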

May 11, 2022

How to Evaluate MongoDB 3.4 Using Your Existing Deployment in MongoDB Atlas

MongoDB 3.4 was just released with support for graph processing, multi-language collations, read-only views, faceted navigation, powerful new aggregation operators, numerous performance improvements, and more. MongoDB 3.4 is available in the MongoDB Atlas database service today, so you can quickly and easily deploy new clusters and try it out. But what's the best way to evaluate MongoDB 3.4 against an existing production workload you have running in MongoDB Atlas? As with any software upgrade, you should first test your application.

MongoDB Atlas automates spinning up new production-ready MongoDB clusters, manages backups and restores, and automates on-demand upgrades of MongoDB from 3.2 to 3.4 at any time. By combining these capabilities, you can very easily:

- Spin up a test environment from a backup
- Upgrade the test environment from MongoDB 3.2 to MongoDB 3.4
- Confirm that your testing and performance suite pass on MongoDB 3.4

Specifically, in this tutorial we're going to:

1. Start with a production MongoDB Atlas cluster ("Cluster0") running MongoDB 3.2 with the Atlas backup service enabled.
2. Spin up a new MongoDB Atlas test cluster, initially on MongoDB 3.2 ("cluster34upgrade"), with sufficient storage to restore the production cluster's backup.
3. Restore the most recent backup snapshot from the production cluster into the test cluster.
4. Upgrade the test cluster from MongoDB 3.2 to MongoDB 3.4.
5. Execute our test suite against the MongoDB 3.4 test cluster.
6. If all goes well, upgrade our production cluster to MongoDB 3.4 and terminate our test cluster. The production upgrade is made in place and completes without application downtime.

1. Start with a MongoDB Atlas cluster ("Cluster0") running MongoDB 3.2 and with the Atlas backup service enabled.

For the purposes of this tutorial, we have pre-loaded this cluster and have it under active load.

2. Create a new MongoDB Atlas cluster, initially on MongoDB 3.2 ("cluster34upgrade"), with enough storage to restore the production cluster's backup.

We'll follow the "Add new cluster" flow and deploy an M20 instance with 40GB of storage, just like our production cluster in this example. Note that for your live production deployments, we do recommend that you run on a MongoDB Atlas M30 or higher. After the cluster deploys, we'll see the following:

3. Restore the most recent backup snapshot from the production cluster into the test upgrade cluster.

We'll navigate into the "Backup" tab and find the backup for our production cluster. Click into the Options menu ("...") and select "Restore". Choose the most recent snapshot, select "Restore my snapshot", and select the test upgrade cluster as the destination for the restore. Now the automated restore into the test cluster will be performed. Depending on the size of the data files being restored, this operation could take some time. After the restore finishes, we'll see the following:
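Before moving on, it's worth spot-checking the restored data. Here is a minimal sketch, assuming Python with PyMongo; the connection URI is a hypothetical placeholder for the test cluster's connection string shown in Atlas.

```python
from pymongo import MongoClient

# Hypothetical placeholder URI -- use the connection string Atlas shows
# for the "cluster34upgrade" test cluster.
test = MongoClient("mongodb://user:password@cluster34upgrade.example.net:27017")

# List every restored namespace with an approximate document count. If the
# production cluster is under active load, counts reflect the snapshot time.
for db_name in test.list_database_names():
    if db_name in ("admin", "local", "config"):
        continue
    for coll_name in test[db_name].list_collection_names():
        count = test[db_name][coll_name].estimated_document_count()
        print(f"{db_name}.{coll_name}: {count} documents")
```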
4. Upgrade the test upgrade cluster from MongoDB 3.2 to MongoDB 3.4.

Now we can initiate our testing procedures to understand the implications at the application tier during the online upgrade process. To perform the upgrade, we'll select "Configuration" on the new test cluster into which we've just restored our backup. Next, we'll click "Change version" and select "MongoDB 3.4 with WiredTiger". After selecting MongoDB 3.4, we'll see the important notice that downgrading from MongoDB 3.4 to 3.2 will not be possible. This is one of the reasons why performing a test cluster upgrade is highly recommended. After deploying, we'll see the blue bar during the upgrade, and quickly we'll have an upgraded cluster running on MongoDB 3.4.

5. Execute our test suite against the MongoDB 3.4 test upgrade cluster.

We'll confirm connectivity for all of the clients that use our cluster and put the new cluster through our entire test and performance suite (if it wasn't already running from before the upgrade was initiated). We'll either confirm that we're ready for MongoDB 3.4 or potentially unearth areas for tuning or necessary driver upgrades, depending on our usage. (A minimal version-check sketch appears at the end of this article.)

6. If our tests pass, we can upgrade our production cluster to MongoDB 3.4 and terminate our test cluster.

We'll click "Configure" on our production cluster. Because MongoDB Atlas is built on replica sets, the upgrade process performs a rolling restart of the cluster; as a result, the production application remains available and online. We can then terminate our test cluster if we no longer want to maintain it for testing.

Next steps

If you're an existing MongoDB Atlas user on MongoDB 3.2, we encourage you to review the compatibility changes in MongoDB 3.4 and the full release notes. You can then follow the process outlined above to restore a backup to a new cluster and upgrade it to MongoDB 3.4. If you're new to MongoDB Atlas, you can easily deploy your first MongoDB 3.4 cluster: just click the "Build a new cluster" button, then select MongoDB 3.4 in the build cluster form.
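As referenced in step 5, here is a minimal sketch of a version check that can gate the rest of a test suite, assuming Python with PyMongo; the URI is a hypothetical placeholder for the upgraded test cluster.

```python
from pymongo import MongoClient

# Hypothetical placeholder URI for the upgraded test cluster.
client = MongoClient("mongodb://user:password@cluster34upgrade.example.net:27017")

# Gate the test suite on the server actually running 3.4.
version = client.server_info()["version"]
assert version.startswith("3.4"), f"unexpected server version: {version}"
print(f"test cluster is on MongoDB {version}; proceeding with the full suite")
```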

December 5, 2016

Introducing VPC Peering for MongoDB Atlas

MongoDB Atlas now allows you to directly peer virtual private clouds (VPCs) in your AWS accounts with the MongoDB Atlas VPC created for your MongoDB clusters, so you can easily create an extended, private network connecting your application servers and backend databases. VPC peering in MongoDB Atlas is a significant ease-of-use and security improvement:

- Your application servers (and development environments) can directly connect to MongoDB Atlas while remaining isolated from public networks.
- You can automatically scale your application tier without having to manage your database firewall rules.
- You can peer multiple VPCs in the same region from your AWS account(s) to each MongoDB Atlas group.
- Security groups from your peered VPC can even be referenced in MongoDB Atlas clusters.

Tutorial

Let's walk through what using this functionality feels like.

Prerequisites:

- Create an AWS account
- Create a VPC
- Enable "DNS hostnames" on the VPC (optional). This makes it possible to immediately resolve the hostnames of the peered MongoDB Atlas cluster's VPC to their private IP addresses (otherwise propagation can take up to one hour).
- Launch instances that you can SSH into
- Download the MongoDB shell software onto those instances to confirm connectivity
- Create a MongoDB Atlas account
- Deploy a cluster in the same region as your AWS VPC

Step-by-step guide

1. Register for a MongoDB Atlas account.
2. Deploy a cluster (the US-East region is shown here).
3. While the database cluster is deploying, navigate to the "Security" tab's "Peering" section.
4. Add a New Peering Connection and include the information about your existing VPC (helpful "Show me how" instructions can be found throughout this process). Note that the default VPC used for EC2 instances uses a CIDR block that overlaps with the one used by MongoDB Atlas and so cannot be peered; a new one must be created. I created a VPC with a CIDR block of "10.0.0.0/16" for testing. Enable "DNS hostnames" on the VPC and record the VPC ID for use in the peering form. Before using the VPC for any EC2 instances, it is necessary to create a new subnet for the VPC; in this case I used the full CIDR of the VPC. Then create an EC2 instance using the new VPC and subnet.
5. Fill in the peering request form (AWS account detail omitted) and include the entire VPC CIDR (10.0.0.0/16); you could optionally include a subset here. Notes: In this example, I am leaving the default option, "Add this CIDR block to my IP whitelist", selected so that I will be able to immediately connect (but as we'll see later, I could instead use a security group). Also, because I have already created a MongoDB Atlas cluster, the MongoDB Atlas region and CIDR block cannot be adjusted (if I were in a new MongoDB Atlas group that did not have a cluster yet, I could specify those).
6. At this point, assuming you have correctly filled in the peering request details, you should see "Waiting for Approval". The UI contains a helpful "How do I approve the connection?" section with two steps: (i) accept the peering request in my AWS account, and (ii) add a route table entry for the Atlas CIDR block shown in the top right so that my VPC routes to the MongoDB Atlas VPC.
7. In the AWS Console, under the VPC Dashboard, in the "Peering Connections" section, choose "Accept Request".
8. In the AWS Console, under the "Route Table" for your VPC, choose "Add another rule", paste in the MongoDB Atlas CIDR block, and associate it with the VPC peering connection. (A scripted version of steps 7 and 8 is sketched below.)
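For teams that prefer automation over console clicks, the accept-and-route steps can also be done programmatically. Here is a minimal sketch, assuming Python with boto3; the peering connection ID, route table ID, and Atlas CIDR block are hypothetical placeholders to be replaced with the values shown in the Atlas UI and your AWS account.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Step 7: accept the peering request that MongoDB Atlas initiated.
# (Hypothetical placeholder ID.)
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId="pcx-0123456789abcdef0"
)

# Step 8: route traffic bound for the Atlas CIDR block through the peering
# connection. (Hypothetical route table ID and CIDR -- copy the Atlas CIDR
# block shown in the top right of the peering UI.)
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="192.168.248.0/21",
    VpcPeeringConnectionId="pcx-0123456789abcdef0",
)
```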
Note that if you don't see a 0.0.0.0/0 route associated with an internet gateway, you should add one if you want to SSH directly into your VPC's instances from your laptop; this may necessitate creating a new internet gateway.

9. After accepting the peering connection in our VPC, MongoDB Atlas will display the peering connection as "Available" (this may take up to 10 minutes to show).
10. Now let's demonstrate connectivity by navigating to our cluster in MongoDB Atlas and clicking "Connect" to follow the instructions. We can confirm that the CIDR block associated with our peered VPC has already been added to our IP address whitelist. We'll download and extract the MongoDB shell for the operating system of the instance in our VPC and use the 'mongo' shell instructions shown. Success! We've connected without having any public IP addresses open to our MongoDB Atlas cluster.
11. Now let's remove the CIDR block (IP addresses) from our IP address whitelist and demonstrate that we can instead reference a security group from our peered VPC:
a. We'll navigate to the "Security" tab's "IP Whitelist" section.
b. We'll click "Delete" on the peer VPC's CIDR block (10.0.0.0/16 in this case).
c. We'll add an inbound rule to our EC2 instance's security group so that connections on ports 27000-28000 can be made from within the security group itself.
d. We'll click "Add IP Address" but enter the security group ID associated with the instance in our VPC instead of an IP address.
e. We can confirm connectivity again, with no explicit IP addresses in our whitelist. Awesome!

Next steps

Register for MongoDB Atlas (https://www.mongodb.com/cloud/atlas?jmp=blog) and deploy your first cluster today!

November 3, 2016

Parse shutdown: How to seamlessly continue operations with AWS and MongoDB Cloud Manager

Editor's note: Existing hosted Parse users can migrate their back end using Parse's Database Migration tool directly to MongoDB Atlas. See how in this tutorial.

Parse was a bold offering in the burgeoning space of Backend-as-a-Service, and we're sorry to see them wind down. MongoDB was founded to make it easy for application developers to build great products for their users, so we are 100% behind projects that serve that mission on another level, and we are proud that Parse built their backend service on top of MongoDB. Fortunately, Parse has provided their users with a viable and straightforward migration path. Along with their announcement, they released Parse Server, an open source replacement for their hosted backend. To use this solution, a team needs to provision a private MongoDB database, migrate their Parse data to their MongoDB instance, deploy the Parse Server into a hosting environment of their choosing, and update their client to issue API calls to their new Parse Server.

MongoDB Cloud Manager's tight integration with AWS and Azure makes it easy to set up a MongoDB deployment, and AWS's Elastic Beanstalk makes deploying Parse Server relatively easy. Together with the data migration tool that Parse is providing, the transition can be as seamless and painless as possible. To that end, we have put together a comprehensive guide to migrating a Parse backend to AWS with Cloud Manager and Elastic Beanstalk. A guide to doing the same with Azure is in development and will be posted soon. Using either solution will maximize the portability and stability of your new infrastructure. The guides are aimed at mobile developers and assume no prior experience with MongoDB, Node.js, or AWS. They provide watertight, high-level checklists, with guidance on the effort you can expect to expend at each stage, as well as detailed, step-by-step directions for each. In addition, they are great jumping-off points to the specific MongoDB documentation relevant to those migrating from Parse, such as how to manage indexes and set up monitoring.

Migrate from Parse to MongoDB and AWS

Learn more: Join us for a live presentation on the steps required to migrate from the Parse platform to your own deployment of MongoDB on Amazon Web Services.

February 3, 2016