A wide variety of companies around the world, from innovators in the social media space to industry leaders in energy, are running MongoDB on Google Cloud Platform (GCP). Increasingly, these organizations are consuming MongoDB as a fully managed service with MongoDB Atlas, which boosts the productivity of teams that touch the database by reducing the operational overhead of setup, ongoing management, and performance optimization.
When MongoDB Atlas became available on GCP last June, users were able to run it in 4 regions: us-east1 (South Carolina), us-central1 (Iowa), asia-east1 (Taiwan), europe-west1 (Belgium). This week we’re excited to launch the service across all Google Cloud Platform regions, allowing you to easily deploy and run MongoDB near you.
Most GCP regions are made up of 3 isolated locations called zones where resources can be provisioned. MongoDB Atlas automatically distributes a 3-node replica set across the zones in a region, ensuring that the automated election and failover process can complete successfully if the zone containing the primary node becomes unavailable.
For Atlas deployments in GCP’s Singapore region, which contains 2 zones instead of 3, it’s recommended that users enable Atlas’s cross-region replication to obtain a similar level of redundancy.
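The redundancy argument above comes down to majority voting: a replica set can elect a new primary only if a strict majority of the voting members survives. As a minimal stand-alone sketch (illustrating the arithmetic, not Atlas code), the following shows why three zones tolerate a zone outage while two zones may not:

```go
package main

import "fmt"

// canElectPrimary reports whether a replica set can still elect a primary
// after losing one zone: the surviving members must form a strict majority
// of the original voting membership.
func canElectPrimary(membersPerZone []int, failedZone int) bool {
	total, surviving := 0, 0
	for zone, n := range membersPerZone {
		total += n
		if zone != failedZone {
			surviving += n
		}
	}
	return surviving > total/2
}

func main() {
	// A 3-node replica set spread across 3 zones: any single-zone
	// failure leaves 2 of 3 voting members, so an election succeeds.
	fmt.Println(canElectPrimary([]int{1, 1, 1}, 0)) // true

	// In a 2-zone region, the zone holding 2 of the 3 members is a
	// single point of failure: losing it leaves only 1 of 3 votes.
	fmt.Println(canElectPrimary([]int{2, 1}, 0)) // false
}
```

This is why cross-region replication is the recommended remedy in a two-zone region: it moves a voting member outside the region so that no single zone holds a majority.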
Atlas is available across all GCP regions now. We’re excited to see what you build with MongoDB and Google services!
Not an Atlas user yet? Get started <a href="https://www.mongodb.com/cloud/atlas?jmp=blog" target="_blank">here</a>.
Considering the Community Effects of Introducing an Official MongoDB Go Driver
What do you do when an open-source project you rely on no longer meets your needs? When your choice affects not just you, but a larger community, what principles guide your decision? Submitting patches is often the first option, but you're at the mercy of the maintainer to accept them. If the changes you need are sweeping, substantial alterations, the odds of acceptance are low. Eventually, only a few realistic options remain: find an alternative, fork the project, or write your own replacement. Everyone who depends on open source faces this conundrum at one time or another.

After relying for years on the community-developed mgo Go driver for MongoDB, MongoDB has begun work on a brand-new, internally-developed, open-source Go driver. We know that releasing a company-sponsored alternative to a successful, community-developed project creates tension and uncertainty for users, so we did not make this decision lightly. We carefully considered how our choice would affect current and future Go users of MongoDB.

First, some history: Gustavo Niemeyer first announced the mgo community driver in March 2011 – around the same time that MongoDB released version 1.8.0 of the database. It currently has over 1,800 stars on GitHub and 32 contributors – including several current and former MongoDB employees. The incredible success of MongoDB in the Go community owes a great deal to Gustavo and mgo.

MongoDB itself is part of this community. As the Go language matured and gained in popularity, MongoDB found many uses for it internally. Some of the projects using it include:

- Our remote agents for automated deployment, backup, and monitoring.
- Our command-line operations tools, like mongodump (rewritten in Go for the 3.0 server release).
- Our home-grown continuous integration system, Evergreen.
- Our cloud products, like MongoDB Atlas and Stitch, which have major components written in Go.
From this experience, our engineers contributed back to mgo: over half a dozen employees have commits in mgo, accounting for over 2,000 lines of changes. But the more we used mgo, the more we discovered limitations.

With our in-house drivers – covering popular languages with deep commercial adoption – we often start driver feature development in parallel with server feature development so that we can test them as soon as the server merges a feature. But as a community project, mgo's feature support generally lags MongoDB server development. More critically, our products that use mgo can't easily test against or take advantage of new server features. Even if we thought that Go didn't yet have critical mass in our user base to justify an in-house driver, our own company's products can't wait for new features. Sometimes, we patched a private copy of mgo to implement new features we critically needed. This isn't always easy.

In 2015, we announced our next generation drivers, built upon a published set of specifications for driver behavior. Because mgo predates this work, its conventions and internals don't match our specifications. When the server implements new features and the driver development team writes specs to match, these new specs assume implementation of prior specs. Developing comparable features in mgo can mean starting from a completely different base.

Not only does mgo have different internal conventions and behaviors than our in-house drivers, it encapsulates these behaviors in ways we found constraining. Usually, encapsulation is a good thing – a sign of good design – but many of our products benefit from low-level access to sockets, wire protocol models, and encoding. End users don't need this access, but we have the knowledge to work with our own communication protocols and message formats safely and to great effect.
For example, our mongoreplay tool lets users replay a tcpdump of MongoDB server requests against a different server or cluster. When replaying the workload, we need server connection and authentication features – part of mgo's public API – but to replicate per-connection traffic we also need direct control over the number of socket connections and the socket message traffic, all of which is private. To enqueue requests and to read responses, we need access to the types representing the wire protocol messages – also private types that are never visible to end users.

Over time, we found ourselves copying and pasting parts of mgo source into project-specific libraries, or re-implementing parts of the wire protocol or driver behaviors directly. There is a real cost in the time it takes engineers to patch mgo or to write, fix, and extend a plethora of internal libraries, plus the opportunity cost of our own products not being able to use our own server's latest features. We decided to consolidate and standardize on one implementation to address all these needs. We considered two alternatives:

- Fork mgo completely – developing at our pace, modifying internals as needed, and extending the APIs to suit our needs.
- Develop a new driver – building from the ground up to our specifications, putting it on par with our other officially-maintained drivers.

Forking mgo would have a handful of benefits but many challenges. In the benefits column, forking would minimize the impact on our existing products that use mgo, as well as on any user who chose to use our fork over the original. In the challenges column, we identified both technical and social considerations that gave us pause.
On the technical side, a fork wouldn't solve the large gap to our common specifications, making new feature development much harder than for our internally-developed drivers. It also raises a tough question: what if we implement a new feature in our fork only to find that mgo implements it a different way? The more we might take the internal architecture and the API in a different direction from mgo, the harder it would be to keep our fork a "drop-in" replacement and the harder it would be to send patches upstream or to merge in upstream development. We felt a fork would quickly become an independent, backwards-incompatible product, despite a common lineage – undercutting the alleged benefit of forking.

On the social side, we knew that anything we released – whether a fork or a new driver – could have a disruptive effect on the existing mgo community. We didn't want to discourage anyone happily using mgo with MongoDB from continuing to use it. We wanted to invite people who wanted something more to try something new, rather than – via forking – implicitly asking people to pick sides in a project they already use. Forking could also imply that we would take on mgo's technical debt, which we wanted to avoid.

In light of these challenges, we decided instead to write a new, independently-developed Go driver to join the eleven other drivers in our officially-maintained driver ecosystem. A fresh start allows us to focus our efforts on four main benefits:

- Velocity: once complete, the new Go driver will evolve as fast as the server does. We'll be able to dog-food new features internally before each server GA release.
- Consistency: the new Go driver will follow our common specifications from the outset, so the new driver API will feel like other MongoDB drivers, shortening the learning curve for users. We'll also stay idiomatic to Go, such as supporting context objects for cancellable requests.
- Performance: a new driver gives us the opportunity to provide a new, higher-performance BSON library and to design the driver API in a way that gives users more control over memory allocations.
- Low-level API: for our own in-house products and other power users, we will provide low-level components for reuse, reducing code duplication across the company. Unlike the rest of the driver, this API will have no stability guarantee and no end-user support, but it will let us develop better products faster, and our users will benefit that way.

Fortunately, we were able to start from a prototype driver custom-developed for our BI Connector – written by a former driver engineer – and build from that base towards the common driver specification. We're now finalizing the details of the new BSON library and the core CRUD API.

What's next for the driver? In the coming months, we'll ship an "alpha" release of the Go driver and make the code repository public. At that point we'll ask members of the Go-using MongoDB community to try it out and help us improve it with their feedback.

Update, 2/19/2018: The new driver is now in alpha. Please read the announcement for more info about trying it out.
Modernize your GraphQL APIs with MongoDB Atlas and AWS AppSync
Modern applications typically need data from a variety of data sources, which are frequently backed by different databases and fronted by a multitude of REST APIs. Consolidating the data into a single coherent API presents a significant challenge for application developers. GraphQL emerged as a leading data query and manipulation language to simplify the consolidation of various APIs. GraphQL provides a complete and understandable description of the data in your API, giving clients the power to ask for exactly what they need — while making it easier to evolve APIs over time. It complements popular development stacks like MEAN and MERN, aggregating data from multiple origins into a single source that applications can then easily interact with.

MongoDB Atlas: A modern developer data platform

MongoDB Atlas is a modern developer data platform with a fully managed cloud database at its core. It provides rich features like native time series collections, geospatial data, multi-level indexing, search, isolated workloads, and many more — all built on top of the flexible MongoDB document data model. MongoDB Atlas App Services help developers build apps, integrate services, and connect to their data by reducing operational overhead through features such as the hosted Data API and GraphQL API. The Atlas Data API allows developers to easily integrate Atlas data into their cloud apps and services over HTTPS with a flexible, REST-like API layer. The Atlas GraphQL API lets developers access Atlas data from any standard GraphQL client with an API that is generated based on your data's schema.

AWS AppSync: Serverless GraphQL and pub/sub APIs

AWS AppSync is an AWS managed service that allows developers to build GraphQL and Pub/Sub APIs. With AWS AppSync, developers can create APIs that access data from one or many sources and enable real-time interactions in their applications.
The resulting APIs are serverless, automatically scale to meet the throughput and latency requirements of the most demanding applications, and charge only for requests to the API and for real-time messages delivered.

Exposing your MongoDB data over a scalable GraphQL API with AWS AppSync

Together, AWS AppSync and MongoDB Atlas help developers create GraphQL APIs by integrating multiple REST APIs and data sources on AWS. This gives frontend developers a single GraphQL API data source to drive their applications. Compared to REST APIs, developers get flexibility in defining the structure of the data while reducing the payload size by bringing back only the attributes that are required. Additionally, developers are able to take advantage of other AWS services such as Amazon Cognito, AWS Amplify, Amazon API Gateway, and AWS Lambda when building modern applications. This allows for a serverless end-to-end architecture, which is backed by MongoDB Atlas serverless instances and available in pay-as-you-go mode from the AWS Marketplace.

Paths to integration

AWS AppSync uses data sources and resolvers to translate GraphQL requests and to retrieve data; for example, users can fetch MongoDB Atlas data using AppSync Direct Lambda Resolvers. Below, we explore two approaches to implementing Lambda Resolvers: using the Atlas Data API or connecting directly via MongoDB drivers.

Using the Atlas Data API in a Direct Lambda Resolver

With this approach, developers leverage the pre-created Atlas Data API when building a Direct Lambda Resolver. This ready-made API acts as a data source in the resolver, and supports popular authentication mechanisms based on API keys, JWT, or email-password. This enables seamless integration with Amazon Cognito to manage customer identity and access. The Atlas Data API lets you read and write data in Atlas using standard HTTPS requests and comes with managed networking and connections, replacing your typical app server.
Any runtime capable of making HTTPS calls is compatible with the API.

Figure 1: Architecture details of Direct Lambda Resolver with Data API

Figure 1 shows how AWS AppSync leverages the AWS Lambda Direct Resolver to connect to the MongoDB Atlas Data API. The Atlas Data API then interacts with your Atlas cluster to retrieve and store the data.

MongoDB driver-based Direct Lambda Resolver

With this option, the Lambda Resolver connects to MongoDB Atlas directly via drivers, which are available in multiple programming languages and provide idiomatic access to MongoDB. MongoDB drivers support a rich set of functionality and options, including the MongoDB Query Language, write and read concerns, and more.

Figure 2: Architecture details of Direct Lambda Resolvers through native MongoDB drivers

Figure 2 shows how the AWS AppSync endpoint leverages Lambda Resolvers to connect to MongoDB Atlas. The Lambda function uses a MongoDB driver to make a direct connection to the Atlas cluster, and to retrieve and store data. The table below summarizes the different resolver implementation approaches.

Table 1: Feature comparison of resolver implementations

Setup

- Atlas cluster: Set up a free cluster in MongoDB Atlas. Configure the database for network security and access, and set up the Data API.
- Secrets Manager: Create an AWS Secrets Manager secret to securely store database credentials.
- Lambda function: Create Lambda functions with the MongoDB Data API or MongoDB drivers, as shown in this GitHub tutorial.
- AWS AppSync setup: Set up AWS AppSync to configure the data source and query.
- Test API: Test the AWS AppSync APIs using the AWS Console or Postman.

Figure 3: Test results for the AWS AppSync query

Conclusion

To learn more, refer to the AppSync Atlas Integration GitHub repository for step-by-step instructions and sample code. This solution can be extended to AWS Amplify for building mobile applications. For further information, please contact firstname.lastname@example.org.