GIANT Stories at MongoDB

Future Facilities Triples the Speed of Development with MongoDB

Future Facilities is an OEM partner of MongoDB that helps engineers and IT professionals use virtual prototyping to better plan IT deployments within data centers. By leveraging Computational Fluid Dynamics (CFD) simulation, users can test what-if scenarios unique to their facilities. Their web-based platform was originally built on MySQL, but the team quickly realized that the database couldn’t scale to meet their needs.

Instead, Future Facilities chose to migrate to MongoDB Enterprise Advanced. We sat down with Akhil Docca, Corporate Marketing & Product Strategy Manager of Future Facilities, to learn how migrating to MongoDB helped to triple the speed of development.

---

Can you tell us a little bit about yourself and Future Facilities?

I lead the marketing and product strategy here at Future Facilities. We provide software and services specifically focused on physical infrastructure design and management to customers in the data center market. Our solutions span the entire data center ecosystem, from design to operations. By utilizing a digital clone that we call the Virtual Facility (VF), our users can see the impact of any change like adding new capacity, upgrading equipment, etc., before it is implemented.

In 2004 we released 6SigmaRoom, the data center industry’s leading CFD software for data centers. 6SigmaRoom is how our users create a VF, where they can input live data from their facility, and include necessary objects such as cooling and power units, servers and racks. Having this digital twin allows engineers to troubleshoot, predict and analyze the impact of any deployment plan, and find the optimal method for implementation. With 6SigmaRoom, engineers can speed up capacity planning and improve the overall efficiency and resilience of their data center.

6SigmaRoom is essential for accurate data center capacity planning; however, it’s a heavy-duty desktop application developed for engineers. We wanted to create a product that Facilities and IT teams could use to improve both their processes and overall data center performance. In 2016 we launched a new product, 6SigmaAccess, to do just that.

6SigmaAccess is a multi-user, browser-based software platform that allows IT professionals to interact with their data center model and propose changes through a central management system. The browser-based architecture allows us to load up a lighter version of the 3D model specifically tailored to the IT capacity planning process.

Here’s how it works. IT planners propose changes such as adding new IT or racks, decommissioning equipment or cabinets, or simply editing attributes. These changes are then submitted and queued up via MongoDB. When the data center engineer opens up 6SigmaRoom, the proposed changes are automatically merged, allowing the engineer to simply run the simulation to see how the changes would affect the facility. If the analysis reveals that the proposed installations don’t impact performance, they can then be approved, merged back into the database, and scheduled for deployment.

MongoDB is the integration layer between 6SigmaAccess and 6SigmaRoom that makes this process possible.
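The queue-and-merge flow described above can be sketched with a couple of small helpers. This is a minimal illustration, not Future Facilities' actual schema; every field name (rackId, status, and so on) is invented:

```python
from datetime import datetime, timezone

def make_proposed_change(rack_id, action, attributes):
    """Build a proposed-change document of the kind 6SigmaAccess might
    queue in MongoDB. Field names are illustrative only."""
    return {
        "rackId": rack_id,
        "action": action,            # e.g. "add", "decommission", "edit"
        "attributes": attributes,    # free-form: the document model imposes no fixed schema
        "status": "pending",         # flipped after the 6SigmaRoom simulation
        "submittedAt": datetime.now(timezone.utc),
    }

def approve_change(change):
    """Mark a simulated-and-approved change as ready for deployment."""
    change["status"] = "approved"
    return change

change = make_proposed_change("rack-42", "add", {"servers": 4, "powerKw": 6.5})
approve_change(change)
```

On a live deployment, the first document would be written with an insert and the approval with an update on the status field; the point of the sketch is that a single flexible document carries the whole proposal through the workflow.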

What were you using before MongoDB?

We initially started building on MySQL, but quickly ran into challenges. Whenever we wanted to make an update to the database schema, there would be a huge demand on time and resources from our developers, DBAs, and ops teams. It quickly became apparent that we wouldn’t be able to scale to meet the needs of our customers. While redesigning the platform, we knew that we had to get away from the rigid architecture of a SQL tabular database.

Our goal was to find a data platform that was easy to work with, that developers would like, and that could scale as our business grew. After briefly considering Cassandra and CouchDB, we selected MongoDB for its strong community ecosystem, which made adopting the technology seamless. MongoDB allows us to focus on delivering new features instead of having to worry about managing the database. We are able to code, test and deliver incremental changes to 6SigmaAccess without having to change 6SigmaRoom. This will shorten our development cycles by 66%, from 9 to 3 months.

Can you describe your MongoDB deployment?

The key components of 6SigmaAccess are Node.js, Angular.js, JSON, and RESTful APIs. 6SigmaRoom is built in C++. We are currently deploying a 3-node cluster to our enterprise customers.

Our technology is built in a way that we aren’t always writing massive amounts of data to the database. 6SigmaAccess changes tend to be a few MBs at a time. 6SigmaRoom data files tend to be in the 100s of GB range, but we only write that data into the database based on a user action. The typical (minimum) server configuration we’ve sized for our applications is 4-16 cores, 64 GB of RAM, and 1 TB of disk space.

We are Windows Active Directory compliant and have additional access controls built into our software that enforce roles and permissions when connecting to the database.

What advice would you give someone who is considering using MongoDB for their next project?

Start early and incorporate MongoDB in your project from the beginning. Redundancy and scalability are at the heart of any application, and planning how to achieve those goals from the outset will make development much smoother down the road. Additionally, choose a vendor with a strong support team. We were extremely impressed with the prowess of MongoDB’s sales and technical teams throughout the conversion process, and look forward to working with them in the future.

Longbow Advantage - Helping companies move beyond the spreadsheet for a real-time view of logistics operations

The global market in supply chain analytics is estimated at some $2.7 billion[1] — and yet, far too often supply chain leaders use spreadsheets to manage their operation, limiting the real-time visibility into their systems.

Longbow Advantage, a supply chain partner, helps companies get the maximum ROI from their supply chain software products. Moving beyond the spreadsheet and generic enterprise BI tools, Longbow developed an application called Rebus™ which allows users to harness the power of smart data and get real-time visibility into their entire supply chain. That means ingesting data in many formats from a wide range of systems, storing it for efficient reference, and presenting it as needed to users — at scale.

MongoDB Atlas is at the heart of Rebus. We talked to Alex Wakefield, Chief Commercial Officer, to find out why they chose to trust such a critical part of their business to MongoDB and how it’s panned out both technically and commercially.

---

Tell us a little bit about Longbow Advantage. How did you come up with the idea?

Sixteen years ago our Founder, Gerry Brady, left his job at a distribution company to build Longbow Advantage. The goal was to build a company that could help streamline warehouse and workforce management implementations, upgrades, and integrations, and put more focus on customer experience and success.

Companies of all sizes have greatly improved distribution processes but still lack real-time visibility into their systems. While there’s a desire to use BI/analytics systems, automate manual processes, and work with information in as close to real-time as possible, most companies continue to rely on manually generated spreadsheets to measure their logistics KPIs, slowing time to insight.

There had to be a better way to help companies address this problem. We built an application called Rebus. This SaaS-based analytics platform, used by industry leaders such as Del Monte Foods and Subaru of America, aggregates and harmonizes logistics data from any supply chain execution software to provide a near real-time view of logistics operations and deliver cross-functional insights. The idea is quite simply to provide more accurate data in as close to real-time as technically possible within a common platform that can be shared across the supply chain.

For example, one company may have a KPI around labor productivity. When that company receives a customer order to ship, there is a lot of information they want to know:

  • Was the order shipped and on-time?
  • How efficiently is the labor staff filling orders?
  • How many orders are processing?
  • How many individual lines or tasks on the order are being filled?

The list goes on. With Rebus, manufacturers, retailers and distributors can segment different business lines like ecommerce, traditional retail, direct to consumer and more, to ensure that they are being productive and meeting the appropriate deadlines. Without this information, a company may miss major deadlines, negatively impact customer satisfaction, miss out on revenue opportunities, and in some cases, incur significant financial penalties.

What are some of the benefits that your customers are experiencing?

Our customers are able to automate a manual and time-intensive metrics process and collect near real-time data in a common platform that can be used across the organization. All of this leads to more efficient decision-making and a coordinated communication effort.

Customers are also able to identify inaccurate or duplicate data that may be contributing to slow performance in their Warehouse and Labor Management software. Rebus provides an immediate way to identify data issues and improve overall performance. This is a huge benefit for customers who are shipping thousands of orders every week.

Why did you decide to use MongoDB?

Four years ago, when we first came up with the idea for Rebus, we gathered a group of employees to brainstorm the best way to build it.

In that brainstorm, one of our employees suggested that we use MongoDB as the underlying datastore. After doing some research, it was clear that the document model was a good match for Rebus. It would allow us to gather, store, and build analytics around a lot of disparate data in close to real time. We decided to build our application on MongoDB Enterprise Advanced.

When and why did you decide to move to MongoDB Atlas?

We first heard about MongoDB Atlas in July 2016 shortly after it launched, but were not able to migrate right away. We maintain strict requirements around compliance and data management, so it was not until May 2017, when MongoDB Atlas became SOC2 compliant, that we decided to migrate. Handing off our database management to the team that builds MongoDB gave us peace of mind and has helped us stay efficient and agile. We wanted to ensure that our team could remain focused on the application and not have to worry about the underlying infrastructure. Atlas allowed us to do just that.

The migration wasn’t hard. We were moving half a terabyte of data into Atlas, which took a couple of goes — the first time didn’t take. But the support team was proactive. After working with us to pinpoint the issue, one of our key technical people reconfigured an option and the process re-ran without any issues. We hit our deadline.

Why did you decide to use Atlas on Google Cloud Platform (GCP)?

Google Cloud Platform is SOC2 compliant and allows us to keep our team highly efficient and focused on developing the application instead of managing the back end. Additionally, GCP gave us great responses that we weren’t getting from other cloud vendors.

How has your experience been so far?

MongoDB Atlas has been fantastic for us. In particular, the real-time performance panel is invaluable, allowing us to see what is going on in our cluster as it’s happening.

In comparison to other databases, both NoSQL and SQL, MongoDB provides huge benefits. Despite the fact that many of our developers have worked with relational databases their entire careers, the way we can get data out of MongoDB is unlike anything they’ve ever seen. That’s even with a smaller, more efficient footprint on our system.

Additionally, the speed of MongoDB has been really helpful. We’re still looking at the results from our load tests, but the ratio of timeouts to successes was very low. Atlas outperforms what we were doing before. We know we can support at least a couple hundred users at one time. That tells us we will be able to go and grow with MongoDB Atlas for years to come.

Thank you for your time Alex.


[1] Grand View Research, Supply Chain Analytics Market Analysis, 2014 - 2025, https://www.grandviewresearch.com/industry-analysis/the-global-supply-chain-analytics-market

Rebus is a trademark of Longbow Advantage Inc.

How Kustomer uses MongoDB and AWS to help fill in the gaps in the customer journey

Kustomer is a SaaS-based customer relationship platform designed to integrate conversations, transactions, and a company's proprietary data in one single system, capturing all aspects of the customer journey. We sat down with Jeremy Suriel, CTO & Co-Founder of Kustomer, to learn more.

Tell us about Kustomer

My co-founder and I worked together for 20 years in customer support. Over time, we’ve seen major changes in the industry - social media gave consumers a voice, users started communicating through text, mobile computing took off - and companies weren’t listening to their customers through these new channels.

Recognizing these changes, Kustomer was launched in 2015 as a CRM platform to improve the customer experience. Our goal is to help companies compile customer information into one place, automate business processes, address the pain points behind customer support systems, and enable users to make smarter, data-driven decisions.

What are you building with MongoDB?

We are building an application that allows Kustomer users to get a complete picture of their customers’ activity from the first interaction through the entire journey. This insight allows customer support representatives to provide a better, more personalized experience to the end user. With Kustomer, users are able to combine conversations, custom objects, and tracked events in an easy-to-use interface. They are able to collect historical data for every account from every channel, get insight into customer sentiment, and more.

We could have chosen any data storage engine for this application. We briefly considered MySQL, Postgres, and DynamoDB; however, when compared to the alternatives, MongoDB was the standout in two key areas. First, we needed to store complicated data in a simple way. MongoDB’s flexible data model allowed us to have independent tenants in our platform with the ability for each customer to define the structure of their data based on their specific requirements. Relational data stores didn’t give us this option, and DynamoDB lacked some key features and flexibility, like easily adding secondary compound indexes to an existing data model.
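As a rough illustration of both points (per-tenant document shapes in one collection, plus a secondary compound index added after the fact), here is a small Python sketch; all tenant, field, and index names are invented, not Kustomer's actual schema:

```python
# Two tenants storing differently shaped records in the same collection:
# the document model lets each tenant define its own structure, with no
# schema migration when one tenant adds a field.
tenant_a_doc = {"tenantId": "a", "name": "Acme", "channels": ["email", "chat"]}
tenant_b_doc = {"tenantId": "b", "name": "Globex", "sentimentScore": 0.87,
                "orders": [{"id": 1, "total": 99.5}]}

# A secondary compound index, expressed as the key list you would hand to
# pymongo's create_index() on a live deployment, e.g.
#   collection.create_index([("tenantId", 1), ("createdAt", -1)])
# Adding such an index to existing data is one call in MongoDB, which is
# the kind of change the interview notes was awkward in DynamoDB.
compound_index_keys = [("tenantId", 1), ("createdAt", -1)]
```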

Second, we decided early on that we would be a JavaScript shop, specifically Node.js on the backend and React.js on the frontend. From a hiring perspective, we found that Node.js engineers have a lot of familiarity with MongoDB. Building our platform on MongoDB helps us get access to top talent with the relevant expertise and allows us to build our application quickly and efficiently.

We were also excited to leverage MongoDB’s WiredTiger storage engine with improved performance and concurrency. Overall, MongoDB was a no-brainer for us.

Please describe your application stack. What technologies or services are you using?

We have a microservice-based architecture with MongoDB as the primary database storing the majority of our data. Our infrastructure is running in AWS where we follow standard best practices.

  • Services are continuously deployed with zero downtime from CircleCI to Amazon Elastic Container Service (ECS), running our Docker-based microservice containers.
  • All services run within an AWS VPC, Multi-AZ for high availability with auto-scaling, and traffic is distributed through AWS ELB/ALBs.
  • API gateways sit in front of all our microservices, handling authentication, authorization, and auditing.
  • Customer Search & Segmentation, which is a core functionality of our platform, is powered by Elasticsearch.
  • We rely on AWS Kinesis Data Streams to collect and process events.
  • We use AWS Lambda functions to help customers populate AWS Redshift and create real-time dashboards. We’re also developing a Snowflake integration for other analytics use cases.
  • Finally, we use Terraform to automatically configure our cloud-based dev, qa, staging, and production environments.

We leverage MongoDB Enterprise Advanced for ongoing support and for the additional software that helps us with database operations. For example, we use the included Cloud Manager product to manage our database backups. The tool helps us upgrade our clusters, connect our alerts to Slack, and more. Our favorite feature of MongoDB Cloud Manager is the profiling/metrics dashboard that allows us to see everything that is happening within our deployment at all times and perform very specific queries to get greater insights into performance.

How is MongoDB performing for you?

MongoDB continues to perform well as our application and usage grows. We now have 1-4 millisecond reads and sub-millisecond writes. Our data volume has grown 80% since last quarter and we currently have 30+ MongoDB databases with well over 100 collections. We may explore sharding one or more of our services’ MongoDB collections and/or migrating to MongoDB Atlas in the future.

Overall we’ve experienced great benefits with MongoDB. We have great response times, are able to get the talent we need, are easily able to personalize our product to our customers’ needs, and more. Our company would not be where we are today if we had based our application on any other database.



Nominations Now Open for the 2018 MongoDB Innovation Awards

Marissa Chieco
February 02, 2018
Events

Nominations for the fifth annual MongoDB Innovation Awards are open! These awards recognize some of the most innovative organizations and individuals across a number of different industries that are building something giant with MongoDB.

All Innovation Award winners will receive complimentary passes to MongoDB World happening in NYC on June 26-27, access to the MongoDB World VIP party, inclusion in a press release and blog post, and more.

Past recipients include Barclays, Cisco, Experian Health, HSBC, Infosys, and InVision. Read more about last year’s winners here.

Nominate yourself, your company, a colleague, a partner, or anyone who is building something interesting with MongoDB. Nominations close March 15th, and winners will be notified in April.

Submit Now

Visit the MongoDB World page for more information. See you at MongoDB World 2018!

How SteppeChange used MongoDB Zones to build a Global Mobile Customer Engagement Platform for 220 million users in the hybrid cloud

Using MongoDB, SteppeChange was able to shave approximately six months off of the development schedule of their application.

SteppeChange is a big data analytics technology firm that designs and implements client-tailored, fast-to-market data science and technology solutions. They work with clients around the world to find innovative answers to challenging problems and allocate analytical effort where it will create the most value.

Gregory Rayzman, CTO and Chief Data Architect at SteppeChange shares how and why the company relies on MongoDB for a variety of solutions, including an extendable mobile customer engagement platform for 220 million global users.


A Complex Task

A global technology company hired us to build a mobile customer engagement platform to be used by mobile operators around the world. The second we were assigned the project, we knew we were in for a challenge, as different countries have vastly different data management laws. While one country might require that all data be encrypted at rest, another might require that all data is stored within its borders.

Our goal was to build a platform with a single code-base, all while balancing multiple data management requirements and meeting the needs of an expected user base of 220 million subscribers globally.

Design Options

Finding a system that would meet varying data management requirements governed by multiple countries was of utmost importance when evaluating database options. After surveying a number of different options including relational players like MySQL and PostgreSQL, and NoSQL players like Cassandra and Couchbase, we quickly realized that MongoDB Enterprise Advanced provided the flexibility, scalability, and agility we needed.

The Zones feature in MongoDB is critical to our application. With it, we can break data from MongoDB collections into multiple shards and assign each shard to a zone associated with a specific geographic location. Zones are part of the same cluster and can be queried globally, but the data resides at sovereign locations where local laws prevail. Not only is latency reduced with MongoDB Zones, but we are also able to scale and grow each zone independently of others.
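The zone setup described here boils down to two administrative commands. The sketch below shows them as the command documents you would run with pymongo's `client.admin.command(...)`; the shard name, zone name, namespace, and shard key are all invented for illustration, not SteppeChange's actual configuration:

```python
# Assign a shard to a geographic zone (shard/zone names are illustrative).
assign_shard = {"addShardToZone": "shard-eu-01", "zone": "EU"}

# Pin a range of subscriber documents to that zone, assuming a compound
# shard key of {country, subscriberId}. The $minKey/$maxKey values below
# are the extended-JSON spellings of BSON MinKey/MaxKey, so the range
# covers every subscriberId for that country. Chunks in this range are
# then only placed on shards in the "EU" zone, keeping the data inside
# the jurisdiction while the cluster remains globally queryable.
pin_range = {
    "updateZoneKeyRange": "engagement.subscribers",
    "min": {"country": "DE", "subscriberId": {"$minKey": 1}},
    "max": {"country": "DE", "subscriberId": {"$maxKey": 1}},
    "zone": "EU",
}
```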

MongoDB Cloud Manager was also a major asset in setting up and monitoring our MongoDB deployment. It allows us to visualize the ongoing state and status of all systems, troubleshoot issues, and easily perform point-in-time restores.

Our Solution

With MongoDB Zones, we separate user data for regulatory purposes and keep it under local jurisdiction. More specifically, user data resides in data centers physically located in the appropriate country, so that the application’s access to user data complies with local regulation boundaries.

We designed and set up a multi-sharded MongoDB cluster consisting of three Zones. Each shard has three voting replicas, in addition to hidden non-voting replicas for reporting purposes, allowing the system to spread load based on node functionality. We do this so the data pertinent to a specific jurisdiction is deployed at data centers inside the respective jurisdictional boundaries, while data that is not subject to the same regulations is deployed on AWS.

For MongoDB Zones deployed on AWS, we distribute replica set nodes across multiple AWS Availability Zones (AZs) to increase application availability and protect against AWS outages. In addition, we leverage a similar design for configuration servers — they reside in multiple AZs as well.
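A replica set with the hidden, non-voting reporting member described above might be configured with a member list like the following (this is the document shape `rs.reconfig()` expects; hostnames and the reporting-node naming are invented):

```python
# Three voting members plus one hidden, non-voting member for reporting.
members = [
    {"_id": 0, "host": "db-eu-1:27017", "votes": 1, "priority": 1},
    {"_id": 1, "host": "db-eu-2:27017", "votes": 1, "priority": 1},
    {"_id": 2, "host": "db-eu-3:27017", "votes": 1, "priority": 1},
    # hidden + votes 0 + priority 0: the node still replicates all data,
    # but is invisible to application reads and can never become primary,
    # so heavy reporting queries don't compete with production traffic.
    {"_id": 3, "host": "db-eu-report:27017", "votes": 0, "priority": 0,
     "hidden": True},
]
voting = [m for m in members if m["votes"] > 0]
```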

To guarantee compliance with security and privacy standards, we also leverage MongoDB’s native encryption. To satisfy regulations around data access, we use the auditing framework to record and log all administrative and non-administrative actions performed against the database.

SteppeChange’s deployment topology diagram


Accelerated Delivery with MongoDB

With MongoDB, we were able to quickly bring our application to market by shaving approximately six months off of our development schedule. Our team has been able to take advantage of MongoDB’s BSON (Binary JSON) document storage, a perfect native match for the JSON-based data structures underlying our app, which gives us an agile way to add new features rapidly. We were also able to simplify our data management and remove the complexities of data migration, increasing developer productivity and allowing our engineering team to concentrate on the task at hand.

As we work towards the future, we are looking to expand our use of other features in MongoDB, like the geospatial capabilities — such as geo-fencing and geo-based offer management — and add them to our Mobile Customer Engagement Platform.

Methodist Le Bonheur Healthcare: Transforming Hospital Operations and Improving Patient Care with MongoDB

When it comes to healthcare, the patient experience is always a top priority. Organizations must coordinate a variety of services, from room cleaning to patient transport, quickly and efficiently. Methodist Le Bonheur Healthcare (MLH), the largest healthcare system in Memphis, Tennessee, relies on MongoDB to make this happen.

We connected with David Deas, MLH’s Corporate Director, Innovation and Knowledge Analytics, to learn more about their MongoDB-powered hospitality application.


Can you tell us a little bit about your company?

Methodist Le Bonheur Healthcare (MLH) was founded in 1918 and is the largest health care system in Memphis, consisting of six hospitals and dozens of smaller facilities and clinics, with nearly 14,000 employees. Our biggest hospital has over 400 beds and operates at near-capacity on a regular basis.

Why did you start using MongoDB?

Coordinating hospitality on a continuous schedule requires dozens of people to ensure the patient remains as comfortable as possible during their stay. Historically, we had been using a legacy relational database system to manage the flow of patients in, out, and around the hospital, and it was on its last legs. When the time came for a new license and a software upgrade, we were hit with a $1.6 million price tag, which gave hospital administrators some pause.

Luckily, our Process Improvement and Innovation team had a plan – build rather than buy. Armed with MongoDB Enterprise Advanced, the Meteor JavaScript web framework, and a few good ideas, they set to work on a new real-time system, called Melodi Flow. Melodi manages room reservations, room cleaning and turnover, and patient transport - all key components in efficiently managing the flow of patients through a hospital stay.

Can you explain how Melodi Flow works and how it uses MongoDB?

Melodi Flow gives the central dispatch office a real-time master view of every room, along with the status of room cleaners and patient transporters. Unlike with our legacy relational system, using MongoDB ensured that there was no delay between status changes, requests, and raised issues. And those antiquated pagers carried around by room cleaners and patient transporters? Replaced by brand new iOS devices with custom mobile apps to easily manage the daily workflow.

The real-time capabilities of the new Melodi Flow system are powered by connected browsers and mobile clients reading updates directly from MongoDB's replica set oplog. The real-time aspect of the new system is a huge win, but an additional, not-so-obvious differentiator is the mountain of data being collected each and every day. Every interaction, from acknowledging reservations, to a completed room cleaning, to fetching a patient for daily treatments, is recorded for later analysis.
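In spirit, a client tailing the oplog filters a stream of operations down to the namespaces it watches and the timestamps it has not yet seen. The tiny in-memory sketch below mimics that filtering; the namespace, fields, and values are invented, and a real client would open a tailable cursor on `local.oplog.rs` (or, on modern MongoDB, use a change stream via `watch()`):

```python
def relevant_ops(oplog_entries, namespace, since_ts):
    """Filter oplog entries the way a tailing client would: only ops on
    the watched namespace, strictly newer than the last timestamp seen."""
    return [e for e in oplog_entries
            if e["ns"] == namespace and e["ts"] > since_ts]

# Simulated oplog: an update to a room's status, an unrelated insert,
# and a second room update (all names/values illustrative).
oplog = [
    {"ts": 1, "ns": "melodi.rooms", "op": "u", "o": {"status": "cleaning"}},
    {"ts": 2, "ns": "melodi.transports", "op": "i", "o": {"patient": "p-17"}},
    {"ts": 3, "ns": "melodi.rooms", "op": "u", "o": {"status": "ready"}},
]

# A client that last saw ts=1 and watches room updates picks up only
# the ts=3 entry.
updates = relevant_ops(oplog, "melodi.rooms", since_ts=1)
```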

Patients awaiting transport, and transporters awaiting assignments.

We are confident in MongoDB Enterprise Advanced. Replica sets give us peace of mind for resilience and performance, and backups are a breeze. Since we designed our schema with reporting in mind, we can spend less time worrying about complicated JOINs and sub-queries, and more time analyzing our data. After only a few weeks of use, we’ve already examined patient complaints, identified peaks, valleys, and bottlenecks in patient flow throughout the day, and created the beginnings for benchmarking employee performance.

What’s next for Melodi Flow?

Our future plans include:

  • The creation of single-view dashboards for upper management to get a snapshot view of the facility at any given time (in comparison to recent history)
  • More advanced aggregations and analytics to find hidden efficiencies
  • Utilizing the MongoDB Connector for BI with Tableau for rich reporting and analysis
  • Scaling out the software to the remaining 5 hospitals in the Methodist Le Bonheur system
  • Coalescing data from our Electronic Medical Records (EMR) system to give better real-time views into the current state of the hospitals and their patients

MongoDB has been a crucial tool in managing one of the busiest hospitals in Memphis. Its utility will only grow as the Process Improvement & Innovation team continues delivering solutions for improving patient care.

Access the MongoDB Hub for Healthcare

Designing the Perfect Prototyping, Workflow and Collaboration Platform in the Cloud

InVision was recognized at MongoDB World 2017 as the winner of the MongoDB Innovation Award: Atlas category. We had the opportunity to sit down with Dana Lawson, InVision’s VP of Platform Engineering to learn more about their innovation.

Tell us about InVision

InVision is the world’s leading design collaboration platform. Helping companies like IBM, Airbnb, Visa, Netflix and Evernote unlock the power behind design-driven product development, InVision makes it easy for teams to prototype, manage their workflow, and control their entire design process all in one place.

The goal of InVision has always been to create a highly collaborative design platform in the cloud that would allow people around the world to have access to design, review, and user test products—all without a single line of code.

Why did you build InVision on MongoDB?

When originally building InVision, we looked to MongoDB right out of the gate because of its uptime and scalability. We needed to be able to provide our clients with a platform that is as reliable as we are.

In addition, MongoDB helps us easily build new features. You can imagine that with designs, requirements are very fluid, and having a restrictive data model is a limiting factor. MongoDB’s document data model has helped us innovate quickly. An example of a project that uses MongoDB is Inspect. Designers who build in Sketch can send their design to Inspect, which breaks down the Sketch file into different layers and allows front-end developers to get assets, CSS tags, and exact pixel dimensions for how the designs should actually look on a live site.

When we started with MongoDB, we were using Chef to automate AWS instances and the database. We had 28 replica sets spread across 4 different environments, some of them in different AWS accounts.

While we were having success, once MongoDB Atlas was released, we immediately took the leap. Moving to a database-as-a-service offering increased our team’s productivity as we were able to focus on our product, rather than managing the infrastructure. As a turnkey cloud database, MongoDB Atlas provides us with the flexibility needed to develop in a secure, robust environment while our customers continue to have access to a highly collaborative design platform in the cloud.

How do you use MongoDB Atlas and what impact does it have on your day to day?

We use MongoDB Atlas on the AWS cloud because of the ease of use and flexibility of the cloud platform. Together they let us build a multi-tenant solution for the JSON (JavaScript Object Notation) messages we receive. Essentially, we use MongoDB as a transactional data store for any data, like our design artifacts, that benefits from not having a pre-defined schema.

The design data for a prototype can have any number of layers and graphics. Anyone who has seen or worked with a design tool will have familiarity with layers and the layers pane where objects can be nested, grouped, and inherited. MongoDB's flexible schema allows our backend services to store the data model for a prototype with minimal effort.
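As a sketch of why this fits the document model: a prototype's nested, grouped layers map directly onto a single nested document, which a backend can store and walk without any pre-defined schema. The structure and names below are invented for illustration, not InVision's actual data model:

```python
# A layer tree like the one in a design tool's layers pane, stored as
# one document: groups nest arbitrarily via a "children" array.
prototype = {
    "name": "checkout-flow",
    "layers": [
        {"name": "background", "type": "rect"},
        {"name": "header-group", "type": "group", "children": [
            {"name": "logo", "type": "image"},
            {"name": "nav", "type": "group", "children": [
                {"name": "home-link", "type": "text"},
            ]},
        ]},
    ],
}

def count_layers(layers):
    """Recursively walk the nested layer tree, counting every node."""
    return sum(1 + count_layers(l.get("children", [])) for l in layers)
```

Because the whole tree lives in one document, retrieving a prototype is a single indexed lookup rather than a multi-table join, which is consistent with the low-millisecond reads described below.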

The most common queries we run are simple .find()s on indexed fields. At our peak, we read thousands of documents per second; we’re able to retrieve documents usually within 2 milliseconds.

By integrating Atlas with all of our provisioning, we have avoided the dreaded DevOps bottleneck. Without training, our engineers can “self-serve” by simply going to the InVision platform and defining the instance they need; MongoDB Atlas configures it automatically and they’re ready to go.

What does your technology stack look like?

Our technology stack is primarily focused on Node and Google Go, allowing us to run a microservices architecture to create independent feature sets on data stores and significantly reduce dependencies. These reductions allow us to quickly spin up data stores and automatically add clusters as needed. Right now we use Kubernetes with MongoDB Atlas, though we are moving everything over to Atlas so we do not have to worry about uptime, EC2 clusters, anything. We’ve incorporated the MongoDB Atlas REST API into our Ansible scripts, which makes it incredibly easy for us to add new replica sets and users for different environments. We simply call out to the API and MongoDB Atlas spins up the replica sets so we don’t have to.

Some other technologies we use include AWS Lambda for short-term execution, Amazon SNS and RabbitMQ for messaging and mobile notifications, and Amazon S3 for storing assets.

To learn more about InVision, watch their talk from MongoDB World 2017 here.