GIANT Stories at MongoDB

New to MongoDB Atlas on AWS — AWS Cloud Provider Snapshots, Free Tier Now Available in Singapore & Mumbai


AWS Cloud Provider Snapshots

MongoDB Atlas is an automated cloud database service designed for agile teams who’d rather spend their time building apps than managing databases, backups, and restores. Today, we’re happy to announce that Cloud Provider Snapshots are now available for MongoDB Atlas replica sets on AWS. As the name suggests, Cloud Provider Snapshots provide fully managed backup storage and recovery using the native snapshot capabilities of the underlying cloud service provider.

Choosing a backup method for a database cluster in MongoDB Atlas

When this feature is enabled, MongoDB Atlas will perform snapshots against the primary in the replica set; snapshots are stored in the same cloud region as the primary, granting you control over where all your data lives. Please visit our documentation for more information on snapshot behavior.

Cloud Provider Snapshots on AWS have built-in incremental backup functionality, meaning that a new snapshot only saves the data that has changed since the previous one. This minimizes the time it takes to create a snapshot and lowers costs by reducing the amount of duplicate data. For example, a cluster with 10 GB of data on disk and 3 snapshots may require less than 30 GB of total snapshot storage, depending on how much of the data changed between snapshots.

Cloud Provider Snapshots are available for M10 clusters or higher in all of the 15 AWS regions where you can deploy MongoDB Atlas clusters.

Note that MongoDB Atlas allows only one backup method per project. Once you select a backup method — whether it’s Continuous Backup or Cloud Provider Snapshots — for a cluster in a project, Atlas locks the backup service to that method for all subsequent clusters in the project. To change the backup method for a project, you must disable backups for all clusters in the project, then re-enable backups using your preferred method. Keep in mind that Atlas deletes any stored snapshots when you disable backup for a cluster, so consider creating a separate Atlas project for database clusters that require a different backup method.


Free, $9, and $25 MongoDB Atlas clusters now available in Singapore & Mumbai

We’re committed to lowering the barrier to entry to MongoDB Atlas and allowing developers to build without worrying about database deployment or management. Last week, we released a price reduction of up to 14% on all MongoDB Atlas clusters deployed in AWS Mumbai. And today, we’re excited to announce the availability of free and affordable database cluster sizes in South and Southeast Asia on AWS.

Free M0 Atlas clusters, which provide 512 MB of storage for experimentation and early development, can now be deployed in AWS Singapore and AWS Mumbai. If more space is required, M2 and M5 Atlas clusters, which provide 2 GB and 5 GB of storage, respectively, are now also available in these regions for just $9 and $25 per month.

MongoDB Atlas Price Reduction - AWS Mumbai

Developers use MongoDB Atlas, the fully automated cloud service for MongoDB, to quickly and securely create database clusters that scale effortlessly to meet the needs of a new generation of applications.

We recognize that the developer community in India is incredibly vibrant and growing rapidly thanks to startups like Darwinbox. The team there built a full suite of HR services online, going from a standing start to a top-four sector brand in the Indian market in just two years.

As part of our ongoing commitment to support the local developer community and lower the barrier to entry to using a MongoDB service that removes the need for time-consuming administration tasks, we are excited to announce a price reduction for MongoDB Atlas. Prices are being reduced by up to 14% on all MongoDB Atlas clusters deployed in AWS Mumbai. With this, we aim to give more developers access to the best way to work with data, automated with built-in best practices.

MongoDB Atlas is available in India on AWS Mumbai and GCP Mumbai. It is also available on Microsoft Azure in Pune, Mumbai and Chennai. Never tried MongoDB Atlas? Click here to learn more.

DarwinBox Evolves HR SaaS Platform and Prepares for 10x Growth with MongoDB Atlas

DarwinBox found a receptive market for its HR SaaS platform for medium to large businesses, but rapid success strained their infrastructure and challenged their resources. We talked to Chaitanya Peddi, Co-founder and Head of Product, to find out how they addressed those challenges with MongoDB Atlas.

Evolution favors those that find ways to thrive in changing environments. DarwinBox has done just that, providing a full spectrum of HR services online and going from a standing start to a top-four sector brand in the Indian market in just two years. From 40 enterprise clients in its first year to more than 80 in its second, it now supports over 200,000 employees, and is hungrily eyeing expansion in new territories.

“We’re expecting 10x growth in the next two years,” says Peddi. “That means aggressive scaling for our platform, and MongoDB Atlas will play a big role.”

Starting from a blank sheet of paper

The company’s key business insight is that employees have grown accustomed to the user experience of online services they access in their personal lives. However, the same ease of use is simply not found at work, especially in HR solutions that address holiday booking, managing benefits, and appraisals. DarwinBox’s approach is to deliver a unified platform of user-friendly HR services to replace a jumble of disparate offerings, and to do so in a way that supports its own aggressive growth plans. The company aims to support nearly every employee interaction with corporate HR, such as recruitment, employee engagement, expense management, separation, and more.

“We started in 2015 from a blank sheet of paper,” Peddi says. “It became very clear very quickly that for most of our use cases, only a non-relational database would work. Not only did we want to provide an exceptionally broad set of integrated services, but we also had clients with a large number of customization requirements. This meant we needed a very flexible data model. We looked at a lot of options. We wanted an open source technology to avoid lock-in and our developers pushed for MongoDB, which fit all our requirements and was a pleasure to work with. Our databases are now 90 percent MongoDB. We expect that to be at 100 percent soon.”

Reducing costs and future-proofing database management

When DarwinBox launched, it ran its databases in-house, which wasn’t ideal. “We have a team of 40+ developers, QA, and testers, and three people running infrastructure, and suddenly we’re growing much faster than we expected. It’s a good problem to have, but we couldn’t afford to offer anything less than excellent service.” Peddi emphasized that of all the things they wanted to do to succeed, becoming database management experts wasn’t high on the list.

This wasn’t the only reason that MongoDB Atlas looked like the next logical step for the company when it became available, says Peddi. “We were rapidly developing our services and our customer base, but our strategies for backing up the databases, for scaling, for high availability, and for monitoring performance weren’t keeping up. In the end, we decided that we’d migrate to Atlas for a few major reasons.”

The first reason was the most obvious. “The costs of managing the databases, infrastructure, and backups were increasing. In addition, it became increasingly difficult to self-manage everything as requirements became more sophisticated and change requests became more frequent. Scaling up and down to match demand and launching new clusters consumed precious man hours. Monitoring performance and issue resolution was taking up more time than we wanted. We had built custom scripts, but they weren’t really up to the task.”

With MongoDB Atlas on AWS, Peddi says, all these issues are greatly reduced. “We’re able to do everything we need with our fully managed database very quickly – scale according to business need at the press of a button, for example. There are other benefits. With MongoDB technical engineers a phone call away, we’re able to fix issues far quicker than we could in the past. MongoDB Compass, the GUI for the database, is proving helpful in letting our teams visually explore our data and tune things accordingly.”

Migrating to Atlas has also helped Darwinbox dramatically reduce costs.

We’ve optimized our database infrastructure and how we manage backups. Not only did we bring down costs by 40%, but by leveraging the queryable snapshot feature, we’re able to restore the data we actually need 80% faster.

Chaitanya Peddi, Co-founder and Head of Product, DarwinBox

The increased availability and data resilience from the switch to MongoDB Atlas on AWS eases the responsibility of managing the details of 200,000 employees’ working lives. “Data is the most sensitive part of our business, the number one thing that we care about,” says Peddi. “We can’t lose even 0.00001 percent of our data. We used to take snapshots of the database, but that was costly and difficult to manage. Now, it’s more of a live copy process. We can guarantee data retention for over a year, and it only takes a few moments to find what you need with MongoDB Atlas.”

For DarwinBox to achieve its target of 10x growth in two years, it has to – and plans to – go international.

“We had that in mind from the outset. We’ve designed our architecture to cope with a much larger scale, both in total employee numbers and client numbers, and to handle different regulatory regimes.” According to Peddi, that means moving to microservices, developing data analytics, maybe even looking at other cloud providers to host the DarwinBox HR Platform. He added: “If we were to do this on AWS and self-manage the database with our current resources, we would have to invest a significant amount of effort into orchestrating and maintaining a globally distributed database. MongoDB Atlas with its cross-region capabilities makes this all much easier.”

Darwinbox is confident that MongoDB Atlas will help the organization achieve its product plans.

“MongoDB Atlas will be able to support the business needs that we've planned out for the next two years,” says Peddi. “We’re happy to see how rapidly the Atlas product roadmap is evolving.”

Get started with MongoDB Atlas and deploy a free database in minutes.

Bienvenue à MongoDB Atlas: MongoDB as a Service Now Available in France

Leo Zheng
May 07, 2018
Cloud


MongoDB Atlas, the fully automated cloud database, is now available in France on Amazon Web Services and Microsoft Azure. Located in the Paris area, these newly supported cloud regions will allow organizations using MongoDB Atlas to better serve their customers in and around France. For deployments in AWS EU (Paris), the following instance sizes are supported. MongoDB Atlas deployments in this cloud region will automatically be distributed across three AWS availability zones (AZ), ensuring that the failure of a single AZ will not impact the database’s automated election and failover process. Currently, customers deploying to AWS EU (Paris) can also replicate their data to regions of their choosing (to provide even greater fault tolerance or fast, responsive read access) if they’re using the M80 (low CPU), M200 (low CPU), or M400 (low CPU) instance sizes.

For MongoDB Atlas deployments in Azure France Central, the following instance sizes are supported. Deployments in this cloud region will automatically be distributed across two Azure fault domains. Assuming a customer deploys a three-node replica set, two of those nodes will be located in one fault domain and the last node will live in its own fault domain. While this configuration has a higher chance of losing availability if a fault domain goes down, cross-region replication can be configured to withstand fault domain and regional outages and is compatible with any Atlas instance size available in Azure France Central.

MongoDB is certified under the EU-US Privacy Shield, and the MongoDB Cloud Terms of Service now includes GDPR-required data processing terms to help MongoDB Atlas customers prepare for May 25, 2018, when the GDPR becomes enforceable.

MongoDB Atlas in France is open for business now and you can start using it today! Get started here.


STREAM: How MongoDB Atlas and AWS make it easier to build, scale, and personalize feeds that reach millions of users

This is a guest post by Ken Hoff of Stream (getstream.io).

Stream is a platform designed for building, personalizing, and scaling activity feeds that reach over 200 million users. We offer an alternative to building app feed functionality from scratch by simplifying implementation and maintenance so companies can stay focused on what makes their products unique.

Today our feed-as-a-service platform helps personalize user experiences for some of the most engaging applications and websites. For example, Product Hunt, which surfaces new products daily and allows enthusiasts to share and geek out about the latest mobile apps, websites, and tech creations, uses our API to power its feeds.

We’ve recently been working on an application called Winds, an open source RSS and podcast application powered by Stream that provides a new, personalized way to listen to, read, and share content.

We chose MongoDB to support the first iteration of Winds as our developers found the database very easy to work with. I personally feel that the mix of data model flexibility, scalability, and rich functionality you get with MongoDB makes it superior to what you would get out of the box with other NoSQL databases or relational databases such as MySQL and PostgreSQL.

Our initial MongoDB deployment was managed by a vendor called Compose, but that ultimately didn’t work out due to issues with availability and cost. We migrated off Compose and built our own self-managed deployment on AWS. When MongoDB’s own database as a service, MongoDB Atlas, was introduced to us, we were very interested. We wanted to reduce the operational work our team was doing, and we found Atlas’s pricing much more predictable than what we had experienced with our previous MongoDB service provider. We also needed a database service that would be highly available out of the box. The fact that MongoDB Atlas enforces a minimum replica set member count and automatically distributes each cluster across AWS availability zones had us sold.

The great thing about managing or scaling MongoDB with MongoDB Atlas is that almost all of the time, we don’t have to worry about it. We run our application on a deployment using M30 instances with the auto-expanding storage option enabled. When our disk utilization approaches 90%, Atlas automatically provisions more storage with no impact on availability. And if we experience spikes in traffic like we have in the past, we can easily scale up or out using MongoDB Atlas by either clicking a few buttons in the UI or triggering a scaling event using the API.
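As an illustration, a scaling event can be triggered by patching the cluster’s instance size through the Atlas API; the username, API key, group ID, and cluster name in this sketch are placeholders, not our actual deployment:

# Scale a cluster from M30 to M40 via the Atlas API (all identifiers are placeholders)
curl -u "jane.doe:API-KEY" --digest \
  -H "Content-Type: application/json" \
  -X PATCH "https://cloud.mongodb.com/api/atlas/v1.0/groups/GROUP-ID/clusters/winds" \
  -d '{"providerSettings": {"providerName": "AWS", "instanceSizeName": "M40"}}'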

Another benefit MongoDB Atlas has provided us is cost savings. With Atlas, we no longer need a dedicated person to worry about operations or maintaining uptime; instead, that person can work on the projects we’d rather have them working on. In addition, our team is able to move much faster. Not only can we make changes on the fly to our application by leveraging MongoDB’s flexible data model, but we can also deploy downstream database changes on the fly or easily spin up new clusters to test new ideas. All of this can happen without impacting production: no worrying about provisioning infrastructure, setting up backups, monitoring, and so on. It’s a real thing of beauty.

In the near future, we plan to look into utilizing change streams from MongoDB 3.6 for our Winds application, which is already undergoing some major upgrades (users can sign up for the beta here). This may eliminate the need to maintain separate Redis instances, which would further increase our savings and reduce architectural complexity.
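Change streams are opened with a watch() cursor in the 3.6 shell and drivers. Here is a minimal sketch from the mongo shell, using a hypothetical collection and placeholder connection details rather than our production setup:

# Connect to the cluster with the 3.6 mongo shell (host and credentials are placeholders)
mongo "mongodb://winds-shard-00-00-dpkz5.mongodb.net:27017/winds" --ssl --authenticationDatabase admin -u winds_user -p

// Inside the shell: open a change stream on a hypothetical articles collection
var watchCursor = db.articles.watch();

// Poll the cursor, printing each change event (insert, update, delete) as it arrives
while (!watchCursor.isExhausted()) {
    if (watchCursor.hasNext()) {
        printjson(watchCursor.next());
    }
}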

We’re also looking into migrating more applications onto MongoDB Atlas, as its built-in high availability, automation, fully managed backups, and performance optimization tools make it a no-brainer. While there are other MongoDB-as-a-service providers out there (Compose, mLab, etc.), no other solution comes close to what MongoDB Atlas provides.

---

Interested in reducing costs and speeding up your time to market? Get started today with a free 512 MB database managed by MongoDB Atlas.


Be a part of the largest gathering of the MongoDB community. Join us at MongoDB World.

MongoDB Enterprise Running on OpenShift

Jason Mimick
April 13, 2018
Cloud

Update: May 2, 2018
Our developer preview of MongoDB Enterprise Server running on OpenShift now includes a simple OpenShift Template. The mongodb-openshift-dev-preview.template.yaml template file removes the complexity and additional requirements of running OpenShift with the --service-catalog enabled and of deploying the Ansible Service Broker (not to mention the need to install the apb tool on your development system to build and deploy the Ansible Playbook Bundle). Currently, the template can provision multiple pods, each running an automation agent configured to the same MongoDB Ops Manager Project. You can complete the deployment of a MongoDB replica set with a few quick clicks in the Ops Manager user interface. We hope the removal of these additional dependencies helps you and your organization quickly adopt this modern, flexible, and full-featured way to deploy and run MongoDB Enterprise on your OpenShift clusters. And stay tuned! This is the tip of the iceberg for support of your cloud native workloads from MongoDB.
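As a sketch of how the template is used with the standard OpenShift CLI (the template name and parameter names here are illustrative placeholders; check the repository for the real ones):

# Upload the developer-preview template to the current OpenShift project
oc create -f mongodb-openshift-dev-preview.template.yaml

# Instantiate it, pointing the automation agents at an Ops Manager project
# (parameter names are hypothetical)
oc new-app --template=mongodb-openshift-dev-preview \
  -p OPS_MANAGER_URL=https://opsmgr.example.com:8080 \
  -p OPS_MANAGER_PROJECT=my-project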


In order to compete and get products to market rapidly, enterprises today leverage cloud-ready and cloud-enabled technologies. Platforms as a Service (PaaS) provide out-of-the-box capabilities that enable application developers to focus on their business logic and users instead of infrastructure and interoperability. This ability separates successful projects from those that drown in tangential work that never stops.

In this blog post, we'll cover MongoDB's general PaaS and cloud enablement strategy as well as touch upon some new features of Red Hat’s OpenShift which enable you to run production-ready MongoDB clusters. We're also excited to announce the developer preview of MongoDB Enterprise Server running on OpenShift. This preview allows you to test out how your applications will interact with MongoDB running on OpenShift.

Integration Approach for MongoDB and PaaS

Platforms as a Service are increasingly popular, especially for those of you charged with building "cloud-enabled" or "cloud-ready" applications but required to use private data center deployments today. Integrating a database with a PaaS needs to be done appropriately to ensure that database instances can be deployed, configured, and administered properly.

There are two common components of any production-ready cloud-enabled database deployment:

  • A modern, rock-solid database (like MongoDB).
  • Tooling to enable telemetry, access and authorization, and backups (not to mention things like proactive alerting that integrates with your chosen issue tracking system, complete REST-based APIs for automation, and a seamless transition to hosted services.) For MongoDB, this is MongoDB Ops Manager.

A deep integration of MongoDB Ops Manager is core to our approach of integrating MongoDB with popular PaaS offerings. The general design follows the "separation of concerns" principle: the chosen PaaS handles the physical or virtual machines, CPU and RAM allotment, persistent storage requirements, and machine-level access control, while MongoDB Ops Manager controls all aspects of run-time database deployments.

This strategy enables system administrators to quickly deploy "MongoDB as a Solution" offerings within their own data centers. In turn, enterprise developers can easily self-service their own database needs.

If you haven't already, download MongoDB Ops Manager for the best way to run MongoDB.

MongoDB Enterprise Server OpenShift Developer Preview

Our "developer preview" for MongoDB on OpenShift can be found here: https://github.com/jasonmimick/mongodb-openshift-dev-preview. The preview allows provisioning of both MongoDB replica sets and "agent-only" nodes (for easy later use as MongoDB instances) directly through OpenShift. The deployments automatically register themselves with an instance of MongoDB Ops Manager. All the technical details and notes of getting started can be found right in the repo. Here we'll just describe some of functionality and technology used.

The preview requires access to an OpenShift cluster running version 3.9 or later and takes advantage of the new Kubernetes Service Catalog features. Specifically, we're using the Ansible Service Broker and have built an Ansible Playbook Bundle which installs an icon into your OpenShift console. The preview also contains an example OpenShift template which supports replica sets and similar functionality.

A tour and deploying your first cluster:

Once you have your development environment ready (see notes in the developer preview GitHub repository) and have configured an instance of MongoDB Ops Manager, you're ready to start deploying MongoDB Enterprise Server.

Clusters can be provisioned through the OpenShift web console or via command line. The web console provides an intuitive "wizard-like" interface in which users specify values for various parameters, such as MongoDB version, storage size allocation, and MongoDB Ops Manager Organization/Project to name a few.

Command line installs are also available in which parameter values can be scripted or predefined. This extensibility allows for automation and integration with various Continuous Integration and Continuous Delivery technologies.
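In a CI/CD pipeline, a scripted install might look like the following sketch, which renders the example template with predefined values; the parameter names are illustrative placeholders:

# Render the template with scripted parameter values and create the resulting objects
# (parameter names are hypothetical; check the template for the real ones)
oc process -f mongodb-openshift-dev-preview.template.yaml \
  -p MONGODB_VERSION=3.6.4 \
  -p OPS_MANAGER_PROJECT=ci-builds \
  | oc create -f -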

A future post will detail cluster configuration and various management scenarios, such as upgrades, performance tuning, and troubleshooting connectivity, so stay tuned.

We're excited to introduce simple and efficient ways to manage your MongoDB deployments with tools such as OpenShift and Kubernetes. Please try out the developer preview and drop us a line on Twitter #mongodb-openshift or email bd@mongodb.com for more information.


Be a part of the largest gathering of the MongoDB community. Join us at MongoDB World.

Modern Distributed Application Deployment with Kubernetes and MongoDB Atlas

Jay Gordon
April 05, 2018
Technical, Cloud

Storytelling is one of the parts of being a Developer Advocate that I enjoy. Sometimes the stories are about the special moments when the team comes together to keep a system running or to build it faster. But there are also less-than-glorious tales to be told about the software deployments I’ve been involved in. And when we needed to deploy several times a day, we are talking nightmares.

For some time, I worked at a company that believed that deploying to production several times a day was ideal for project velocity. Our team was working to ensure that advertising software across our media platform was always being updated and released. One of the issues was a lack of real automation in the process of applying new code to our application servers.

What both ops and development teams had in common was a desire for improved ease and agility around application and configuration deployments. In this article, I’ll present some of my experiences and cover how MongoDB Atlas and Kubernetes can be leveraged together to simplify the process of deploying and managing applications and their underlying dependencies.

Let's talk about how a typical software deployment unfolded:

  1. The developer would send in a ticket asking for the deployment
  2. The developer and I would agree upon a time to deploy the latest software revision
  3. We would modify an existing bash script with the appropriate git repository version info
  4. We’d need to manually back up the old deployment
  5. We’d need to manually create a backup of our current database
  6. We’d watch the bash script perform this "Deploy" on about six servers in parallel
  7. Wave a dead chicken over my keyboard

Some of these deployments would fail, requiring a return to the previous version of the application code. This process of "rolling back" to a prior version involved me manually copying the repository to the older version, performing manual database restores, and finally confirming with the team that used this system that all was working properly. It was a real mess, and I really wasn't in a position to change it.

I eventually moved into a position which gave me greater visibility into what other teams of developers, specifically those in the open source space, were doing for software deployments. I noticed that — surprise! — people were no longer interested in doing the same work over and over again.

Developers and their supporting ops teams have been given keys to a whole new world in the last few years by utilizing containers and automation platforms. Rather than doing manual work required to produce the environment that your app will live in, you can deploy applications quickly thanks to tools like Kubernetes.

What's Kubernetes?

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Kubernetes can help reduce the amount of work your team will have to do when deploying your application. Along with MongoDB Atlas, you can build scalable and resilient applications that stand up to high traffic or can easily be scaled down to reduce costs. Kubernetes runs just about anywhere and can use almost any infrastructure. Whether you're using a public cloud, a hybrid cloud, or even a bare-metal solution, you can leverage Kubernetes to quickly deploy and scale your applications.

The Google Kubernetes Engine is built into the Google Cloud Platform and helps you quickly deploy your containerized applications.

For the purposes of this tutorial, I will upload our image to GCP and then deploy to a Kubernetes cluster so I can quickly scale up or down our application as needed. When I create new versions of our app or make incremental changes, I can simply create a new image and deploy again with Kubernetes.

Why Atlas with Kubernetes?

By using these tools together for your MongoDB Application, you can quickly produce and deploy applications without worrying much about infrastructure management. Atlas provides you with a persistent data-store for your application data without the need to manage the actual database software, replication, upgrades, or monitoring. All of these features are delivered out of the box, allowing you to build and then deploy quickly.

In this tutorial, I will build a MongoDB Atlas cluster where our data will live for a simple Node.js application. I will then turn the app and configuration data for Atlas into a container-ready image with Docker.

MongoDB Atlas is available across most regions on GCP so no matter where your application lives, you can keep your data close by (or distributed) across the cloud.

Figure 1: MongoDB Atlas runs in most GCP regions

Requirements

To follow along with this tutorial, you'll need a few things to get started: a Google Cloud account, Docker installed locally, Git, and Node.js with npm.

First, I will download the repository for the code I will use. In this case, it's a basic record keeping app using MongoDB, Express, React, and Node (MERN).

bash-3.2$ git clone git@github.com:cefjoeii/mern-crud.git
Cloning into 'mern-crud'...
remote: Counting objects: 326, done.
remote: Total 326 (delta 0), reused 0 (delta 0), pack-reused 326
Receiving objects: 100% (326/326), 3.26 MiB | 2.40 MiB/s, done.
Resolving deltas: 100% (137/137), done.

cd mern-crud

Next, I will run npm install to get all the required npm packages installed for working with our app:

> uws@9.14.0 install /Users/jaygordon/work/mern-crud/node_modules/uws
> node-gyp rebuild > build_log.txt 2>&1 || exit 0

Selecting your GCP Region for Atlas

Each GCP region includes a set number of independent zones. Each zone has power, cooling, networking, and control planes that are isolated from other zones. For regions that have at least three zones (3Z), Atlas deploys clusters across three zones. For regions that only have two zones (2Z), Atlas deploys clusters across two zones.

The Atlas Add New Cluster form marks regions that support 3Z clusters as Recommended, as they provide higher availability. If your preferred region only has two zones, consider enabling cross-region replication and placing a replica set member in another region to increase the likelihood that your cluster will be available during partial region outages.

The number of zones in a region has no effect on the number of MongoDB nodes Atlas can deploy. MongoDB Atlas clusters are always made of replica sets with a minimum of three MongoDB nodes.

For general information on GCP regions and zones, see the Google documentation on regions and zones.

Create Cluster and Add a User

In the provided image below you can see I have selected the Cloud Provider "Google Cloud Platform." Next, I selected an instance size, in this case an M10. Deployments using M10 instances are ideal for development. If I were to take this application to production immediately, I may want to consider using an M30 deployment. Since this is a demo, an M10 is sufficient for our application. For a full view of all of the cluster sizes, check out the Atlas pricing page. Once I’ve completed these steps, I can click the "Confirm & Deploy" button. Atlas will spin up my deployment automatically in a few minutes.

Let’s create a username and password for our database that our Kubernetes deployed application will use to access MongoDB.

  • Click "Security" at the top of the page.
  • Click "MongoDB Users"
  • Click "Add New User"
  • Click "Show Advanced Options"
  • We'll then add a user "mernuser" for our mern-crud app that only has access to a database named "mern-crud" and give it a complex password. We'll specify readWrite privileges for this user:

Click "Add User"

Our database is now created and our user is added. We still need our connection string, and we need to whitelist access via the network.

Connection String

Get your connection string by clicking "Clusters" and then clicking "CONNECT" next to your cluster details in your Atlas admin panel. After selecting connect, you are provided several options to use to connect to your cluster. Click "connect your application."

Options for the 3.6 or the 3.4 versions of the MongoDB driver are given. I built mine using the 3.4 driver, so I will just select the connection string for this version.

I will typically paste this into an editor and then modify the info to match my application credentials and my database name:
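The edited string ends up looking something like this (the hostnames, password, and replica set name below are illustrative placeholders, not a real cluster):

mongodb://mernuser:PASSWORD@mern-crud-shard-00-00-dpkz5.mongodb.net:27017,mern-crud-shard-00-01-dpkz5.mongodb.net:27017,mern-crud-shard-00-02-dpkz5.mongodb.net:27017/mern-crud?ssl=true&replicaSet=mern-crud-shard-0&authSource=admin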

I will now add this to the app's database configuration file and save it.

Next, I will package this up into an image with Docker and ship it to Google Kubernetes Engine!

Docker and Google Kubernetes Engine

Get started by creating an account at Google Cloud, then follow the quickstart to create a Google Kubernetes Project.

Once your project is created, you can find it within the Google Cloud Platform control panel:

It's time to create a container on your local workstation:

Set the PROJECT_ID environment variable in your shell by retrieving the preconfigured project ID on gcloud with the commands below:

export PROJECT_ID="jaygordon-mongodb"
gcloud config set project $PROJECT_ID
gcloud config set compute/zone us-central1-b

Next, place a Dockerfile in the root of your repository with the following:

FROM node:boron

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

COPY . /usr/src/app

EXPOSE 3000

CMD ["npm", "start"]

To build the container image of this application and tag it for uploading, run the following command:

bash-3.2$ docker build -t gcr.io/${PROJECT_ID}/mern-crud:v1 .
Sending build context to Docker daemon  40.66MB
Successfully built b8c5be5def8f
Successfully tagged gcr.io/jgordon-gc/mern-crud:v1

Upload the container image to the Container Registry so we can deploy to it:

bash-3.2$ gcloud docker -- push gcr.io/${PROJECT_ID}/mern-crud:v1
The push refers to repository [gcr.io/jaygordon-mongodb/mern-crud]

Next, I will test it locally on my workstation to make sure the app loads:

docker run --rm -p 3000:3000 gcr.io/${PROJECT_ID}/mern-crud:v1
> mern-crud@0.1.0 start /usr/src/app
> node server
Listening on port 3000

Great — pointing my browser to http://localhost:3000 brings me to the site. Now it's time to create a Kubernetes cluster and deploy our application to it.

Build Your Cluster With Google Kubernetes Engine

I will be using the Google Cloud Shell within the Google Cloud control panel to manage my deployment. The Cloud Shell comes with all required applications and tools installed, allowing me to deploy the Docker image I uploaded to the image registry without installing any additional software on my local workstation.

Now I will create the Kubernetes cluster where the image will be deployed, bringing our application to production. I will include three nodes to ensure uptime of our app.

Set up our environment first:

export PROJECT_ID="jaygordon-mongodb"
gcloud config set project $PROJECT_ID
gcloud config set compute/zone us-central1-b

Launch the cluster

gcloud container clusters create mern-crud --num-nodes=3

When completed, you will have a three-node Kubernetes cluster visible in your control panel. After a few minutes, the console will respond with the following output:

Creating cluster mern-crud...done.
Created [https://container.googleapis.com/v1/projects/jaygordon-mongodb/zones/us-central1-b/clusters/mern-crud].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-b/mern-crud?project=jaygordon-mongodb
kubeconfig entry generated for mern-crud.
NAME       LOCATION       MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
mern-crud  us-central1-b  1.8.7-gke.1     35.225.138.208  n1-standard-1  1.8.7-gke.1   3          RUNNING

Just a few more steps left. Now we'll deploy our app with kubectl to our cluster from the Google Cloud Shell:

kubectl run mern-crud --image=gcr.io/${PROJECT_ID}/mern-crud:v1 --port 3000

The output when completed should be:

jay_gordon@jaygordon-mongodb:~$ kubectl run mern-crud --image=gcr.io/${PROJECT_ID}/mern-crud:v1 --port 3000
deployment "mern-crud" created

Now review the application deployment status:

jay_gordon@jaygordon-mongodb:~$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
mern-crud-6b96b59dfd-4kqrr   1/1       Running   0          1m
jay_gordon@jaygordon-mongodb:~$

We'll create a load balancer in front of the three nodes in the cluster so our application can be served properly to the web:

jay_gordon@jaygordon-mongodb:~$ kubectl expose deployment mern-crud --type=LoadBalancer --port 80 --target-port 3000 
service "mern-crud" exposed

Now get the IP of the load balancer so that, if needed, it can be bound to a DNS name and you can go live!

jay_gordon@jaygordon-mongodb:~$ kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.27.240.1     <none>         443/TCP        11m
mern-crud    LoadBalancer   10.27.243.208   35.226.15.67   80:30684/TCP   2m

A quick curl test shows me that my app is online!

bash-3.2$ curl -v 35.226.15.67
* Rebuilt URL to: 35.226.15.67/
*   Trying 35.226.15.67...
* TCP_NODELAY set
* Connected to 35.226.15.67 (35.226.15.67) port 80 (#0)
> GET / HTTP/1.1
> Host: 35.226.15.67
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express

I have added some test data, and as we can see, it's served by the application deployed via Kubernetes to GCP, with my persistent data stored in MongoDB Atlas.

When I am done working with the Kubernetes cluster, I can destroy it easily:

gcloud container clusters delete mern-crud

What's Next?

You've now got all the tools in front of you to build something HUGE with MongoDB Atlas and Kubernetes.

Check out the rest of the Google Kubernetes Engine's tutorials for more information on how to build applications with Kubernetes. For more information on MongoDB Atlas, click here.

Have more questions? Join the MongoDB Community Slack!

Continue to learn via high quality, technical talks, workshops, and hands-on tutorials. Join us at MongoDB World.

Longbow Advantage - Helping companies move beyond the spreadsheet for a real-time view of logistics operations

The global market in supply chain analytics is estimated at some $2.7 billion[1] — and yet, far too often, supply chain leaders use spreadsheets to manage their operations, limiting real-time visibility into their systems.

Longbow Advantage, a supply chain partner, helps companies get the maximum ROI from their supply chain software products. Moving beyond the spreadsheet and generic enterprise BI tools, Longbow developed an application called Rebus™ which allows users to harness the power of smart data and get real-time visibility into their entire supply chain. That means ingesting data in many formats from a wide range of systems, storing it for efficient reference, and presenting it as needed to users — at scale.

MongoDB Atlas is at the heart of Rebus. We talked to Alex Wakefield, Chief Commercial Officer, to find out why they chose to trust such a critical part of their business to MongoDB and how it’s panned out both technically and commercially.

---

Tell us a little bit about Longbow Advantage. How did you come up with the idea?

Sixteen years ago our Founder, Gerry Brady, left his job at a distribution company to build Longbow Advantage. The goal was to build a company that could help streamline warehouse and workforce management implementations, upgrades, and integrations, and put more focus on customer experience and success.

Companies of all sizes have greatly improved distribution processes but still lack real-time visibility into their systems. While there’s a desire to use BI/analytics systems, automate manual processes, and work with information in as close to real-time as possible, most companies continue to rely on manually generated spreadsheets to measure their logistics KPIs, slowing down speed to insights.

There had to be a better way to help companies address this problem. We built an application called Rebus. This SaaS-based analytics platform, used by industry leaders such as Del Monte Foods and Subaru of America, aggregates and harmonizes logistics data from any supply chain execution software to provide a near real-time view of logistics operations and deliver cross-functional insights. The idea is quite simply to provide more accurate data in as close to real-time as technically possible within a common platform that can be shared across the supply chain.

For example, one company may have a KPI around labor productivity. When that company receives a customer order to ship, there is a lot of information they want to know:

  • Was the order shipped and on-time?
  • How efficiently is the labor staff filling orders?
  • How many orders are processing?
  • How many individual lines or tasks on the order are being filled?

The list goes on. With Rebus, manufacturers, retailers and distributors can segment different business lines like ecommerce, traditional retail, direct to consumer and more, to ensure that they are being productive and meeting the appropriate deadlines. Without this information, a company may miss major deadlines, negatively impact customer satisfaction, miss out on revenue opportunities, and in some cases, incur significant financial penalties.

What are some of the benefits that your customers are experiencing?

Our customers are able to automate a manual and time-intensive metrics process and collect near real-time data in a common platform that can be used across the organization. All of this leads to more efficient decision-making and a coordinated communication effort.

Customers are also able to identify inaccurate or duplicate data that may be contributing to slow performance in their Warehouse and Labor Management software. Rebus provides an immediate way to identify data issues and improve overall performance. This is a huge benefit for customers who are shipping thousands of orders every week.

Why did you decide to use MongoDB?

Four years ago, when we first came up with the idea for Rebus, we gathered a group of employees to brainstorm the best way to build it.

In that brainstorm, one of our employees suggested that we use MongoDB as the underlying datastore. After doing some research, it was clear that the document model was a good match for Rebus. It would allow us to gather, store, and build analytics around a lot of disparate data in close to real time. We decided to build our application on MongoDB Enterprise Advanced.

When and why did you decide to move to MongoDB Atlas?

We first heard about MongoDB Atlas in July 2016 shortly after it launched, but were not able to migrate right away. We maintain strict requirements around compliance and data management, so it was not until May 2017, when MongoDB Atlas became SOC2 compliant, that we decided to migrate. Handing off our database management to the team that builds MongoDB gave us peace of mind and has helped us stay efficient and agile. We wanted to ensure that our team could remain focused on the application and not have to worry about the underlying infrastructure. Atlas allowed us to do just that.

The migration wasn’t hard. We were moving half a terabyte of data into Atlas, which took a couple of goes — the first time didn’t take. But the support team was proactive. After working with us to pinpoint the issue, one of our key technical people reconfigured an option and the process re-ran without any issues. We hit our deadline.

Why did you decide to use Atlas on Google Cloud Platform (GCP)?

Google Cloud Platform is SOC2 compliant and allows us to keep our team highly efficient and focused on developing the application instead of managing the back end. Additionally, GCP gave us great responses that we weren’t getting from other cloud vendors.

How has your experience been so far?

MongoDB Atlas has been fantastic for us. In particular, the real-time performance panel is invaluable, allowing us to see what is going on in our cluster as it’s happening.

In comparison to other databases, both NoSQL and SQL, MongoDB provides huge benefits. Despite the fact that many of our developers have worked with relational databases their entire careers, the way we can get data out of MongoDB is unlike anything they’ve ever seen. And that’s with a smaller, more efficient footprint on our system.

Additionally, the speed of MongoDB has been really helpful. We’re still looking at the results from our load tests, but the ratio of timeouts to successes was very low. Atlas outperforms what we were doing before. We know we can support at least a couple hundred users at one time. That tells us we will be able to go and grow with MongoDB Atlas for years to come.

Thank you for your time Alex.


[1] Grand View Research, Supply Chain Analytics Market Analysis, 2014 - 2025, https://www.grandviewresearch.com/industry-analysis/the-global-supply-chain-analytics-market

Rebus is a trademark of Longbow Advantage Inc.

16 Cities in 5 Months: The MongoDB team is coming to an AWS Summit near you

As our community of users continues to grow and become more diverse, we want to ensure all of our customers are fully equipped to be successful on MongoDB Atlas. To that end, we have partnered with AWS, committing to 16 of their regional Summits. These 16 events span 13 countries and are expected to draw thousands of members of the AWS and MongoDB communities.

Powering an online community of coders with MongoDB Atlas

This is a guest post by Linda Peng (creator of CodeBuddies) and Dhaval Tanna (core contributor).

If you’re learning to code, or if you already have coding experience, it helps to have other people around, such as mentors, coworkers, hackathon buddies, and study partners, to help accelerate your learning, especially when you get stuck.

But not everyone can commute to a tech meetup, or lives in a city with access to a network of study partners or mentors/coworkers who can help them.

CodeBuddies started in 2014 as a free virtual space for independent code learners to share knowledge and help each other learn. It is fully remote and 100% volunteer-driven, and helps those who — due to geography, schedule or personal responsibilities — might not be able to easily attend in-person tech meetups and workshops/hackathons where they could find study partners and mentors.

The community now comprises a mix of experienced software engineers and beginning coders from countries around the world, who share advice and knowledge in a friendly Slack community. Members also use the website at codebuddies.org to start study groups and schedule virtual hangouts. We have a pay-it-forward mentality.

The platform, an open source project, was painstakingly built by volunteer contributors to help members organize study groups and schedule focused hangouts to learn together. In these peer-organized remote hangouts, the scheduler of the hangout might invite others to join them in:

  • Working through a coding exercise together
  • Screen sharing and helping each other through a contribution to an open-sourced project
  • Co-working silently in a “silent” hangout (peer motivation)
  • Helping them practice their knowledge of a topic by attempting to teach it
  • Reading through a chapter of a programming tutorial together

Occasionally, the experience will be magical: a single hangout on a popular framework might have participants joining in at the same time from Australia, the U.S., Finland, Hong Kong, and Nigeria.

The site uses the MeteorJS framework, and the data is stored in a MongoDB database.

For years, with a zero budget, CodeBuddies was hosted on a sandbox instance from mLab. When we had the opportunity to migrate to MongoDB Atlas, our database was small enough that we didn’t need to use live migration (which requires a paid mLab plan), but could migrate it manually. These are the three easy steps we took to complete the migration:

1) Dump the mongo database to a local folder

Once you have stopped application writes to your old database, run:

mongodump -h ds015995.mlab.com --port 15992 --db production-database -u username -p password -o Downloads/dump/production-database

2) Create a new cluster on MongoDB Atlas


3) Use mongorestore to populate the dumped DB into the MongoDB Atlas cluster

  First, whitelist your droplet IP on MongoDB Atlas:



Then you can restore the mlab dump you have in a local folder to MongoDB Atlas:

mongorestore --host my-awesome-cluster-shard-00-00-dpkz5.mongodb.net --port 27018 --authenticationDatabase admin --ssl -u username -p password Downloads/dump/production-database
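Before cutting the application over, it's worth sanity-checking the restore. One quick check (hosts and credentials are the same placeholders as above) compares object counts on the two deployments:

# Object count on the old mLab deployment
mongo ds015995.mlab.com:15992/production-database -u username -p password --eval "printjson(db.stats().objects)"

# Object count on the new Atlas cluster; the two numbers should match
mongo my-awesome-cluster-shard-00-00-dpkz5.mongodb.net:27018/production-database --ssl --authenticationDatabase admin -u username -p password --eval "printjson(db.stats().objects)"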
---

We host our app on DigitalOcean, and use Phusion Passenger to manage our app. When we were ready to make the switchover, we stopped Phusion Passenger, added our MongoDB connection string to our nginx config file, and then restarted Phusion Passenger.
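For the curious, that nginx change is tiny. This is a sketch, assuming the app reads its connection string from the MONGO_URL environment variable (as Meteor apps do), with placeholder paths and credentials:

server {
    listen 80;
    server_name codebuddies.org;
    root /var/www/codebuddies/public;

    passenger_enabled on;
    passenger_app_env production;
    # Point the app at the new Atlas cluster (placeholder credentials)
    passenger_env_var MONGO_URL "mongodb://username:password@my-awesome-cluster-shard-00-00-dpkz5.mongodb.net:27018/production-database?ssl=true&authSource=admin";
}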

---

CodeBuddies is a small project now, but we do not want to be unprepared when the community grows. We chose MongoDB Atlas for its mature performance monitoring tools, professional support, and easy scaling.