Future Facilities Triples the Speed of Development with MongoDB
Future Facilities is an OEM partner of MongoDB that helps engineers and IT professionals use virtual prototyping to better plan IT deployments within data centers. By leveraging Computational Fluid Dynamics (CFD) simulation, users can test what-if scenarios unique to their facilities. Their web-based platform was originally built on MySQL, but the team quickly realized that the database couldn’t scale to meet their needs.
Instead, Future Facilities chose to migrate to MongoDB Enterprise Advanced. We sat down with Akhil Docca, Corporate Marketing & Product Strategy Manager of Future Facilities, to learn how migrating to MongoDB helped to triple the speed of development.
Can you tell us a little bit about yourself and Future Facilities?
I lead the marketing and product strategy here at Future Facilities. We provide software and services specifically focused on physical infrastructure design and management to customers in the data center market. Our solutions span the entire data center ecosystem, from design to operations. By utilizing a digital clone that we call the Virtual Facility (VF), our users can see the impact of any change like adding new capacity, upgrading equipment, etc., before it is implemented.
In 2004 we released 6SigmaRoom, the data center industry’s leading CFD software for data centers. 6SigmaRoom is how our users create a VF, where they can input live data from their facility, and include necessary objects such as cooling and power units, servers and racks. Having this digital twin allows engineers to troubleshoot, predict and analyze the impact of any deployment plan, and find the optimal method for implementation. With 6SigmaRoom, engineers can speed up capacity planning and improve the overall efficiency and resilience of their data center.
6SigmaRoom is essential for accurate data center capacity planning; however, it’s a heavy-duty desktop application developed for engineers. We wanted to create a product that Facilities and IT teams could use to improve both their processes and overall data center performance. In 2016 we launched a new product, 6SigmaAccess, to do just that.
6SigmaAccess is a multi-user, browser-based software platform that allows IT professionals to interact with their data center model and propose changes through a central management system. The browser-based architecture allows us to load up a lighter version of the 3D model specifically tailored to the IT capacity planning process.
Here’s how it works. IT planners propose changes such as adding new IT or racks, decommissioning equipment or cabinets, or simply editing attributes. These changes are then submitted and queued up via MongoDB. When the data center engineer opens up 6SigmaRoom, the proposed changes are automatically merged, allowing the engineer to simply run the simulation to see how the changes would affect the facility. If the analysis reveals that the proposed installations don’t impact performance, they can then be approved, merged back into the database, and scheduled for deployment.
MongoDB is the integration layer between 6SigmaAccess and 6SigmaRoom that makes this process possible.
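To make the queue-and-merge flow above concrete, here is a minimal sketch in Python of what a queued change request might look like. The field names, status values, and merge rule are illustrative assumptions, not Future Facilities’ actual schema.

```python
from datetime import datetime, timezone

def propose_change(author, action, target):
    # Hypothetical shape of a queued change request document.
    return {
        "author": author,
        "action": action,          # e.g. "add_rack", "decommission"
        "target": target,          # object in the facility model
        "status": "pending",       # pending -> merged -> approved
        "submitted_at": datetime.now(timezone.utc),
    }

def merge_pending(changes):
    """Mark pending changes as merged, as 6SigmaRoom might on open."""
    merged = []
    for change in changes:
        if change["status"] == "pending":
            change = {**change, "status": "merged"}
        merged.append(change)
    return merged

queue = [propose_change("it-planner", "add_rack", "Row 4, Slot 12")]
queue = merge_pending(queue)
print(queue[0]["status"])  # merged
```

In a live deployment these documents would be inserted into and read from a MongoDB collection; plain dicts are used here so the sketch stands alone.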
What were you using before MongoDB?
We initially started building on MySQL, but quickly ran into challenges. Whenever we wanted to make an update to the database schema, there would be a huge demand on time and resources from our developers, DBAs, and ops teams. It quickly became apparent that we wouldn’t be able to scale to meet the needs of our customers. While redesigning the platform, we knew that we had to get away from the rigid architecture of a SQL tabular database.
Our goal was to find a data platform that was easy to work with, that developers would like, and that could scale as our business grew. After briefly considering Cassandra and CouchDB, we selected MongoDB for its strong community ecosystem, which made adopting the technology seamless. MongoDB allows us to focus on delivering new features instead of having to worry about managing the database. We are able to code, test and deliver incremental changes to 6SigmaAccess without having to change 6SigmaRoom. This will shorten our development cycles by 66%, from 9 to 3 months.
Can you describe your MongoDB deployment?
The key components of 6SigmaAccess are Node.js, AngularJS, JSON, and RESTful APIs. 6SigmaRoom is built on C++. We are currently deploying a 3-node cluster to our enterprise customers.
Our technology is built in a way that we aren’t always writing massive amounts of data to the database. 6SigmaAccess changes tend to be a few MBs at a time. 6SigmaRoom data files tend to be in the 100s of GB range, but we only write the data into the database based on a user action. The typical (minimum) server configuration that we’ve sized for our applications is: 4-16 cores, 64 GB of RAM, and 1 TB of disk space.
We are Windows Active Directory compliant and have additional access controls built into our software that enforce roles and permissions when connecting to the database.
What advice would you give someone who is considering using MongoDB for their next project?
Start early and incorporate MongoDB in your project from the beginning. Redundancy and scalability are important at the heart of any application and planning how to achieve those goals from the onset will make development much smoother down the road. Additionally, choose a vendor with a strong support team. We were extremely impressed with MongoDB’s sales and technical team prowess throughout the conversion process, and look forward to working with them in the future.
STREAM: How MongoDB Atlas and AWS help make it easier to build, scale, and personalize feeds that reach millions of users
Stream is a platform designed for building, personalizing, and scaling activity feeds that reach over 200 million users. We offer an alternative to building app feed functionality from scratch by simplifying implementation and maintenance so companies can stay focused on what makes their products unique.
Today our feed-as-a-service platform helps personalize user experiences for some of the most engaging applications and websites. For example, Product Hunt, which surfaces new products daily and allows enthusiasts to share and geek out about the latest mobile apps, websites, and tech creations, uses our API to do so.
We’ve recently been working on an application called Winds, an open source RSS and podcast application powered by Stream, that provides a new and personalized way to listen, read, and share content.
We chose MongoDB to support the first iteration of Winds as our developers found the database very easy to work with. I personally feel that the mix of data model flexibility, scalability, and rich functionality that you get with MongoDB makes it superior to what you would get out of the box with other NoSQL databases or tabular databases such as MySQL and PostgreSQL.
Our initial MongoDB deployment was managed by a vendor called Compose but that ultimately didn’t work out due to issues with availability and cost. We migrated off Compose and built our own self-managed deployment on AWS. When MongoDB’s own database as a service, MongoDB Atlas, was introduced to us, we were very interested. We wanted to reduce the operational work that our team was doing and found Atlas’s pricing much more predictable than what we had experienced with our previous MongoDB service provider. We also needed a database service that would be highly available out of the box. The fact that MongoDB Atlas sets a minimum replica set member count and automatically distributes each cluster across AWS availability zones had us sold.
The great thing about managing or scaling MongoDB with MongoDB Atlas is that almost all of the time, we don’t have to worry about it. We run our application on a deployment using M30 instances with the auto-expanding storage option enabled. When our disk utilization approaches 90%, Atlas automatically provisions more storage with no impact to availability. And if we experience spikes in traffic like we have in the past, we can easily scale up or out using MongoDB Atlas, either by clicking a few buttons in the UI or by triggering a scaling event using the API.
Another benefit that MongoDB Atlas has provided us is on the cost savings side. With Atlas, we no longer need a dedicated person to worry about operations or maintaining uptime. Instead, that person can work on the projects that we’d rather have them working on. In addition, our team is able to move much faster. Not only can we make changes on the fly to our application leveraging MongoDB’s flexible data model, but we can deploy any downstream database changes on the fly or easily spin up new clusters to test new ideas. All of these can happen without impacting things in production; no worrying about provisioning infrastructure, setting up backups, monitoring, etc. It’s a real thing of beauty.
In the near future, we plan to look into utilizing change streams from MongoDB 3.6 for our Winds application, which is already undergoing some major upgrades (users can sign up for the beta here). This may eliminate the need to maintain separate Redis instances, which would further increase our savings and reduce architectural complexity.
We’re also looking into migrating more applications onto MongoDB Atlas as its built-in high availability, automation, fully managed backups, and performance optimization tools make it a no-brainer. While there are other MongoDB as a service providers out there (Compose, mLab, etc.) available, no other solution comes close to what MongoDB Atlas can provide.
Interested in reducing costs and accelerating time to market? Get started today with a free 512 MB database managed by MongoDB Atlas.
Be a part of the largest gathering of the MongoDB community. Join us at MongoDB World.
Longbow Advantage - Helping companies move beyond the spreadsheet for a real-time view of logistics operations
The global market for supply chain analytics is estimated at some $2.7 billion — and yet, far too often, supply chain leaders use spreadsheets to manage their operations, limiting real-time visibility into their systems.
Longbow Advantage, a supply chain partner, helps companies get the maximum ROI from their supply chain software products. Moving beyond the spreadsheet and generic enterprise BI tools, Longbow developed an application called Rebus™ which allows users to harness the power of smart data and get real-time visibility into their entire supply chain. That means ingesting data in many formats from a wide range of systems, storing it for efficient reference, and presenting it as needed to users — at scale.
MongoDB Atlas is at the heart of Rebus. We talked to Alex Wakefield, Chief Commercial Officer, to find out why they chose to trust such a critical part of their business to MongoDB and how it’s panned out both technically and commercially.
Tell us a little bit about Longbow Advantage. How did you come up with the idea?
Sixteen years ago our Founder, Gerry Brady, left his job at a distribution company to build Longbow Advantage. The goal was to build a company that could help streamline warehouse and workforce management implementations, upgrades, and integrations, and put more focus on customer experience and success.
Companies of all sizes have greatly improved distribution processes but still lack real-time visibility into their systems. While there’s a desire to use BI/analytics systems, automate manual processes, and work with information in as close to real-time as possible, most companies continue to rely on manually generated spreadsheets to measure their logistics KPIs, slowing down speed to insights.
There had to be a better way to help companies address this problem. We built an application called Rebus. This SaaS-based analytics platform, used by industry leaders such as Del Monte Foods and Subaru of America, aggregates and harmonizes logistics data from any supply chain execution software to provide a near real-time view of logistics operations and deliver cross-functional insights. The idea is quite simply to provide more accurate data in as close to real-time as technically possible within a common platform that can be shared across the supply chain.
For example, one company may have a KPI around labor productivity. When that company receives a customer order to ship, there is a lot of information they want to know:
- Was the order shipped and on-time?
- How efficiently is the labor staff filling orders?
- How many orders are processing?
- How many individual lines or tasks on the order are being filled?
The list goes on. With Rebus, manufacturers, retailers and distributors can segment different business lines like ecommerce, traditional retail, direct to consumer and more, to ensure that they are being productive and meeting the appropriate deadlines. Without this information, a company may miss major deadlines, negatively impact customer satisfaction, miss out on revenue opportunities, and in some cases, incur significant financial penalties.
What are some of the benefits that your customers are experiencing?
Our customers are able to automate a manual and time-intensive metrics process and collect near real-time data in a common platform that can be used across the organization. All of this leads to more efficient decision-making and a coordinated communication effort.
Customers are also able to identify inaccurate or duplicate data that may be contributing to slow performance in their Warehouse and Labor Management software. Rebus provides an immediate way to identify data issues and improve overall performance. This is a huge benefit for customers who are shipping thousands of orders every week.
Why did you decide to use MongoDB?
Four years ago, when we first came up with the idea for Rebus, we gathered a group of employees to brainstorm the best way to build it.
In that brainstorm, one of our employees suggested that we use MongoDB as the underlying datastore. After doing some research, it was clear that the document model was a good match for Rebus. It would allow us to gather, store, and build analytics around a lot of disparate data in close to real time. We decided to build our application on MongoDB Enterprise Advanced.
When and why did you decide to move to MongoDB Atlas?
We first heard about MongoDB Atlas in July 2016 shortly after it launched, but were not able to migrate right away. We maintain strict requirements around compliance and data management, so it was not until May 2017, when MongoDB Atlas became SOC2 compliant, that we decided to migrate. Handing off our database management to the team that builds MongoDB gave us peace of mind and has helped us stay efficient and agile. We wanted to ensure that our team could remain focused on the application and not have to worry about the underlying infrastructure. Atlas allowed us to do just that.
The migration wasn’t hard. We were moving half a terabyte of data into Atlas, which took a couple of goes — the first time didn’t take. But the support team was proactive. After working with us to pinpoint the issue, one of our key technical people reconfigured an option and the process re-ran without any issues. We hit our deadline.
Why did you decide to use Atlas on Google Cloud Platform (GCP)?
Google Cloud Platform is SOC2 compliant and allows us to keep our team highly efficient and focused on developing the application instead of managing the back end. Additionally, GCP gave us great responses that we weren’t getting from other cloud vendors.
How has your experience been so far?
MongoDB Atlas has been fantastic for us. In particular, the real-time performance panel is fantastic, allowing us to see what is going on in our cluster as it’s happening.
In comparison to other databases, both NoSQL and SQL, MongoDB provides huge benefits. Despite the fact that many of our developers have worked with relational databases their entire careers, the way we can get data out of MongoDB is unlike anything they’ve ever seen. That’s even with a smaller, more efficient footprint on our system.
Additionally, the speed of MongoDB has been really helpful. We’re still looking at the results from our load tests, but the ratio of timeouts to successes was very low. Atlas outperforms what we were doing before. We know we can support at least a couple hundred users at one time. That tells us we will be able to go and grow with MongoDB Atlas for years to come.
Thank you for your time, Alex.
Grand View Research, Supply Chain Analytics Market Analysis, 2014-2025, https://www.grandviewresearch.com/industry-analysis/the-global-supply-chain-analytics-market
Rebus is a trademark of Longbow Advantage Inc.
Powering an online community of coders with MongoDB Atlas
If you’re learning to code, or if you already have coding experience, it helps to have other people around -- like mentors, coworkers, hackathon buddies and study partners -- to help accelerate your learning, especially when you get stuck.
But not everyone can commute to a tech meetup, or lives in a city with access to a network of study partners or mentors/coworkers who can help them.
CodeBuddies started in 2014 as a free virtual space for independent code learners to share knowledge and help each other learn. It is fully remote and 100% volunteer-driven, and helps those who — due to geography, schedule or personal responsibilities — might not be able to easily attend in-person tech meetups and workshops/hackathons where they could find study partners and mentors.
The community now comprises a mix of experienced software engineers and beginning coders from countries around the world, who share advice and knowledge in a friendly Slack community. Members also use the website at codebuddies.org to start study groups and schedule virtual hangouts. We have a pay-it-forward mentality.
The platform, an open-sourced project, was painstakingly built by volunteer contributors to help members organize study groups and schedule focused hangouts to learn together. In those peer-to-peer organized remote hangouts, the scheduler of the hangout might invite others to join them in:
- Working through a coding exercise together
- Screen sharing and helping each other through a contribution to an open-sourced project
- Co-working silently in a “silent” hangout (peer motivation)
- Helping them practice their knowledge of a topic by attempting to teach it
- Reading through a chapter of a programming tutorial together
Occasionally, the experience will be magical: a single hangout on a popular framework might have participants joining in at the same time from Australia, the U.S., Finland, Hong Kong, and Nigeria.
The site uses the MeteorJS framework, and the data is stored in a MongoDB database.
For years, with a zero budget, CodeBuddies was hosted on a sandbox instance from mLab. When we had the opportunity to migrate to MongoDB Atlas, our database was small enough that we didn’t need to use live migration (which requires a paid mLab plan), but could migrate it manually. These are the three easy steps we took to complete the migration:
1) Dump the mongo database to a local folder
Once you have stopped application writes to your old database, run:
mongodump -h ds015995.mlab.com --port 15992 --db production-database -u username -p password -o Downloads/dump/production-database
2) Create a new cluster on MongoDB Atlas
3) Use mongorestore to populate the dumped DB into the MongoDB Atlas cluster
First, whitelist your droplet IP on MongoDB Atlas.
Then you can restore the mlab dump you have in a local folder to MongoDB Atlas:
mongorestore --host my-awesome-cluster-shard-00-00-dpkz5.mongodb.net --port 27018 --authenticationDatabase admin --ssl -u username -p password Downloads/dump/production-database
We host our app on DigitalOcean, and use Phusion Passenger to manage our app. When we were ready to make the switchover, we stopped Phusion Passenger, added our MongoDB connection string to our nginx config file, and then restarted Phusion Passenger.
CodeBuddies is a small project now, but we do not want to be unprepared when the community grows. We chose MongoDB Atlas for its mature performance monitoring tools, professional support, and easy scaling.
How Kustomer uses MongoDB and AWS to help fill in the gaps in the customer journey
Tell us about Kustomer
My co-founder and I worked together for 20 years in customer support. Over time, we’ve seen major changes in the industry - social media gave consumers a voice, users started communicating through text, mobile computing took off - and companies weren’t listening to their customers through these new channels.
Recognizing these changes, Kustomer was launched in 2015 as a CRM platform to improve the customer experience. Our goal is to help companies compile customer information into one place, automate business processes, address the pain points behind customer support systems, and enable users to make smarter, data driven decisions.
What are you building with MongoDB?
We are building an application that allows Kustomer users to get a complete picture of their customer’s activity from the first interaction through the entire journey. This insight allows customer support representatives to provide a better, more personalized experience to the end user. With Kustomer, users are able to combine conversations, custom objects, and track events in an easy-to-use interface. They are able to collect historical data behind every account from every channel, get insight into the customer sentiment, and more.
We could have chosen any data storage engine for this application. We briefly considered MySQL, Postgres, and DynamoDB, however, when compared to the alternatives, MongoDB was the stand out in two key areas. First, we needed to store complicated data in a simple way. MongoDB’s flexible data model allowed us to have independent tenants in our platform with the ability for each customer to define the structure of their data based on their specific requirements. Relational data stores didn’t give us this option and DynamoDB lacked some key features and flexibility like easily adding secondary compound indexes to an existing data model.
We were also excited to leverage MongoDB’s WiredTiger storage engine with improved performance and concurrency. Overall, MongoDB was a no-brainer for us.
Please describe your application stack. What technologies or services are you using?
We have a microservice-based architecture with MongoDB as the primary database storing the majority of our data. Our infrastructure is running in AWS where we follow standard best practices.
- Services are continuously deployed with zero downtime from CircleCI to Amazon Elastic Container Service (ECS), which runs our Docker-based microservice containers.
- All services run within an AWS VPC across multiple Availability Zones for high availability, with auto-scaling and traffic distributed through AWS ELB/ALBs.
- API gateways sit in front of all our microservices, handling authentication, authorization, and auditing.
- Customer Search & Segmentation, which is a core functionality of our platform, is powered by Elasticsearch.
- We rely on AWS Kinesis Data Streams to collect and process events.
- We use AWS Lambda functions to help customers populate AWS Redshift and create real-time dashboards. We’re also developing a Snowflake integration for other analytics use cases.
- Finally, we use Terraform to automatically configure our cloud-based dev, qa, staging, and production environments.
We leverage MongoDB Enterprise Advanced for ongoing support and for the additional software that helps us with database operations. For example, we use the included Cloud Manager product to manage our database backups. The tool helps us upgrade our clusters, connect our alerts to Slack, and more. Our favorite feature of MongoDB Cloud Manager is the profiling/metrics dashboard that allows us to see everything that is happening within our deployment at all times and perform very specific queries to get greater insights into performance.
How is MongoDB performing for you?
MongoDB continues to perform well as our application and usage grows. We now have 1-4 millisecond reads and sub-millisecond writes. Our data volume has grown 80% since last quarter and we currently have 30+ MongoDB databases with well over 100 collections. We may explore sharding one or more of our services’ MongoDB collections and/or migrating to MongoDB Atlas in the future.
Overall we’ve experienced great benefits with MongoDB. We have great response times, are able to get the talent we need, are easily able to personalize our product to our customers’ needs, and more. Our company would not be where we are today if we had based our application on any other database.
SEGA HARDlight Migrates to MongoDB Atlas to Simplify Ops and Improve Experience for Millions of Mobile Gamers
It was way back in the summer of ‘91 that Sonic the Hedgehog first chased rings across our 2D screens. Gaming has come a long way since then: from a static TV and console setup in ‘91, to online PC gaming in the noughties, and now to mobile and virtual reality. Surprisingly, for most of those 25 years, the underlying infrastructure that powered these games hasn’t really changed much at all. It was all relational databases. But with the ever-increasing need for scale, flexibility, and creativity in games, that’s changing fast. SEGA HARDlight is leading this shift by adopting a DevOps culture and using MongoDB Atlas, the cloud-hosted MongoDB service, to deliver the best possible gaming experience.
Bringing Icons to Mobile Games
SEGA HARDlight is a mobile development studio for SEGA, a gaming company you might have heard of. Based in the UK’s Royal Leamington Spa, SEGA HARDlight is well known for bringing the much-loved blue mascot Sonic the Hedgehog to the small screen. Along with a range of Sonic games, HARDlight is also responsible for building and running a number of other iconic titles such as Crazy Taxi: City Rush and Kingdom Conquest: Dark Empire.
Earlier versions of the mobile games such as Sonic Jump and Sonic Dash didn’t require a connection to the internet and had no server functionality. As they were relatively static games, developers initially supported the releases with an in-house tech stack based around Java and MySQL and hosted in SEGA HARDlight’s own data centre.
The standard practice for launching these games involved load testing the servers to the point of breaking, then provisioning the resources to handle an acceptable failure point. This limited application functionality, and could cause service outages when reaching the provisioned resources’ breaking point. As the games started to add more online functionality and increased in popularity, that traditional stack started to creak.
Massive Adoption: Spiky Traffic
Mobile games have an interesting load pattern. People flock in extreme numbers very soon after the release. For the most popular games, this can mean many millions of people in just a few days or even hours. The peak is usually short and then quickly drops to a long tail of dedicated players. Provisioning for this kind of traffic with a dynamic game is a major headache. The graph from the Crazy Taxi: City Rush launch in 2014 demonstrates just how spiky the traffic can be.
We spoke with Yordan Gyurchev, Technical Director at SEGA HARDlight, who explained: “With these massive volumes even minor changes in the database have a big impact. To provide a perfect gaming experience, developers need to be intimately familiar with the performance trade-offs of the database they’re using.”
SEGA HARDlight knew that the games were only going to get more online functionality and generate even more massive bursts of user activity. Much of the gaming data was also account-based, so it didn’t fit naturally in the rows and columns of relational databases. In order to address these limitations, the team searched for alternatives. After reviewing Cassandra and Couchbase, and finding them either too complex to manage or lacking the mature support needed to meet the company’s SLAs, the HARDlight engineers looked to MongoDB Atlas, the MongoDB database as a service.
Then came extensive evaluations and testing across multiple dimensions such as cost, maintenance, monitoring and backups. It was well known that MongoDB natively had the scalability and flexibility to handle large volumes and always-on deployments but HARDlight’s team had to have support on the operations side too.
Advanced operational tooling in MongoDB Atlas gave a small DevOps team of just two staffers the ability to handle and run games even as millions of people join the fray. They no longer had to worry about maintenance, upgrades or backups. In fact, one of the clinchers was the point-in-time backup and restore feature, which means they can roll back to a checkpoint with the click of a button. With MongoDB Atlas and running on AWS, SEGA HARDlight was ready to take on even Boss Level scaling.
“At HARDlight we’re passionate about finding the right tool for the job. For us we could see that using a horizontally scalable document database was a perfect fit for player-account based games,” said Yordan.
“The ability to create a high traffic volume, highly scalable solution is about knowing the tiny details. To do that, normally engineers need to focus on many different parts of the stack but MongoDB Atlas and MongoDB’s support gives us a considerable shortcut. If this was handled in-house we would only be as good as our database expert. Now we can rely on a wealth of knowledge, expertise and best in class technology.”
HARDlight’s first MongoDB-powered game was Kingdom Conquest: Dark Empire, which was a frictionless launch from the start and gave the engineers their first experiences of MongoDB. Then, in a weekend in late 2017, Sonic Forces: Speed Battle was launched on mobile. It’s a demanding, always-on application that maintains a constant connection to the internet and shared leaderboards. In the background, a three-shard cluster running on MongoDB Atlas easily scaled to handle the complex loads as millions of gamers joined the race. The database was stable with low latencies and not a single service interruption. All of this resulted in a low-stress launch, a happy DevOps team and a very enthusiastic set of gamers.
Yordan concluded: “With MySQL, it had taken multiple game launches to get the database backend right. With MongoDB Atlas, big launches were a success right from the start. That’s no mean feat.”
Just as the gaming platforms have evolved and transformed through the years, so too has the database layer had to grow and adapt. SEGA HARDlight is now expanding its use of MongoDB Atlas to support all new games as they come online. By taking care of the operations, management and scaling, MongoDB Atlas lets HARDlight focus on building and running some of the most iconic games in the world. And doing it with confidence.
Gone is the 90s infrastructure. Replaced by a stack that is every bit as modern, powerful and fast as the famous blue hedgehog.
Start your Atlas journey today for free. What are you waiting for?
How Voya.ai uses MongoDB Atlas to Bring a Seamless Customer Experience to the Business Travel Market
As consumer travel apps like Expedia and Kayak are continuously innovating to provide more seamless booking experiences for their customers, their B2B counterparts can seem very outdated in comparison. Hamburg-based startup Voya.ai is looking to change that.
We recently sat down with Voya’s CTO, Pepijn Schoen, to learn more about how they are using MongoDB alongside natural language processing and machine learning to bring B2B travel booking into 2018 with their chat-based app.
MongoDB: Tell me about Voya.
Pepijn Schoen: Voya is a purely digital, business travel app that brings the convenience and customer experience of B2C travel booking tools to the B2B market. We use a chat-based, conversational interface to interpret our users’ travel needs and extract the search parameters using natural language processing. We started the company in 2015 and after winning the Best Travel Technology Award in 2016, grew the company to a now 50-people team of travel experts, servicing 150 companies with their business travel needs.
What about the B2B travel booking market are you trying to disrupt?
Most of the tools businesses use today were created 10 to 20 years ago. Since then, companies like Expedia, Kayak, and Booking.com have transformed our expectations of what a travel booking experience should be like. In addition, today’s business travel booking process includes many different layers of vendors. For example, a company may work with different vendors for flights, car rental, and hotels on top of vendors for expense management and for providing the search and booking front end. All of this creates unnecessary friction for the end user. Voya allows a direct, simple solution for business travelers to create itineraries that comply with company policies and expense processes.
Tell me about how Voya is using AI.
We use artificial intelligence in two primary ways. Firstly, we use natural language processing to interpret chat-based user inputs. For most requests, the entire booking experience can be handled this way, but we also have a mechanism to connect the user to a live agent for more complex requests.
Secondly, we have built a proprietary flight and hotel matching engine that considers a multitude of different parameters when recommending travel options to a user. For example, companies may have price or airline restrictions, users may have a preference for a certain rewards program, and nearly all business travelers prefer shorter, more direct routes over long layovers. Our matching engine considers these factors to suggest the best flights and hotels.
What tools and technologies are you using to make this possible?
Our NLP is powered by a Java-based application using Google Dialogflow and the Layer API for messaging. The rest of our stack includes AngularJS (including Angular 4 and 5), Python, .NET, MySQL, and MongoDB in AWS via MongoDB Atlas. We also use Kubernetes which makes our deployment very portable. For example, we can leverage Google technologies while keeping our primary datastores in AWS.
How are you using MongoDB?
We use MongoDB to store data about almost every one of the approximately 1.5 million hotels in the world. The support for GeoJSON was one of the key reasons we decided to build on MongoDB, and we feel it is the best option to power our geolocation searches. By storing hotel location and metadata in MongoDB, we can then let our users easily find matching properties by generating geospatial queries behind the scenes without custom application code.
There was a learning curve with this technology. For example, we had to troubleshoot a query that depended on a planar 2d index rather than the more appropriate 2dsphere index, which accounts for the fact that the Earth is not flat!
Currently, we query with a bounding box, but cities are never perfect squares and are therefore best approximated with polygons. We could definitely improve the accuracy of the data we get back from this type of query by using a more complex model.
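The difference between the bounding-box queries Voya uses today and the polygon queries they describe can be sketched with plain MongoDB query documents. This is an illustrative sketch only: the `hotels` collection, `location` field, and the Hamburg coordinates are assumptions, not taken from Voya's actual schema.

```python
# Sketch of the bounding-box vs. polygon approaches described above.
# Collection and field names ("hotels", "location") are hypothetical.

# A bounding box around central Hamburg ([longitude, latitude] pairs).
# Simple to build, but cities are rarely rectangular.
box_query = {
    "location": {
        "$geoWithin": {
            "$box": [[9.90, 53.52], [10.05, 53.60]]
        }
    }
}

# A polygon approximates a city boundary more closely. GeoJSON geometry
# queries like this require a 2dsphere index (not the planar 2d index),
# so results account for the Earth's curvature:
#   db.hotels.create_index([("location", "2dsphere")])
polygon_query = {
    "location": {
        "$geoWithin": {
            "$geometry": {
                "type": "Polygon",
                # The first and last vertex must match to close the ring.
                "coordinates": [[
                    [9.90, 53.52], [10.05, 53.52], [10.07, 53.58],
                    [9.95, 53.60], [9.90, 53.52],
                ]],
            }
        }
    }
}

# With pymongo, either filter would be passed straight to find():
#   matching = db.hotels.find(polygon_query)
```

Both filters use the `$geoWithin` operator; only the shape changes, which is why swapping boxes for polygons is an accuracy improvement rather than an application rewrite.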
Why did you decide to use MongoDB Atlas?
Originally, Voya was built on a single EC2 instance in AWS, and we were running several other tools in a similar way. Rather than spread ourselves too thin building scalable, always-on, backed-up clusters ourselves, we explicitly looked for a managed service, and MongoDB Atlas was a great fit.
The other advantage of building on MongoDB Atlas is that it allows us to expand globally without significant time investments from our team. Our application is currently available in English and German, with most of our users in Central Europe, so we minimize latency by running our MongoDB cluster in AWS’s Frankfurt region. As our user base expands, the ability to take advantage of multi-region replication to maintain this level of service will be incredibly valuable.
What’s next for Voya?
As a full-service travel solution, we are constantly looking at fulfilling our customers’ travel needs. To us, the fragmentation in business travel, with separate travel management companies and online booking tools, didn’t make sense. That’s why we’ve unified them in one solution. To this, we’re adding expense management. Travel expenses are a huge pain for many, wasting hours of travelers’ time tracking receipts manually and filling in forms in Excel for their accounting department. Technologically, this will bring another challenge for us, as we’re trying to encode applicable local legislation (which can change annually) in MongoDB. For example, returning from a business trip to Copenhagen, Denmark, and continuing onwards to Bucharest the same day, requires precise understanding of the applicable allowances.
Additionally, we're continuously investing in artificial intelligence to decrease the turnaround time for travel requests. Our travel experts are there to help reroute you if you miss your New York - London flight, but we're working towards a state where all flight and hotel requests are completely automated.
Q4 Inc. Relies on MongoDB Atlas to Boost Productivity, Outpace the Competition, and Lower Costs
Investor relations (IR) teams integrate information from finance, communications, compliance, and marketing to drive the conversation between a company, their shareholders and investors, and the larger financial community. Knowing the positive effect that a sophisticated web presence would have on investor sentiment, in 2006, Q4 Inc. (Q4) set out to provide multi-functional website solutions for IR teams. Q4 has since expanded their offerings to include capital markets intelligence, and Q4 Desktop – the industry’s first fully-integrated IR platform, which combines communications tools, surveillance, and analytics into a fully featured IR workflow and Customer Relationship Management (CRM) application.
Now with over 1,000 global clients, including many of the Fortune 500, Toronto-based Q4 is the fastest-growing provider of cloud-based IR solutions in the industry. We sat down with Alex Corotchi, VP of Technology, to learn more about their company and how they use MongoDB Atlas across their product portfolio.
Tell us about Q4 and how it’s unique in your industry.
Our goal is to provide best-in-class products for every aspect of IR so that our customers can engage with the right investors, at the right time, with the right message. We started with corporate websites, then moved into investor sites and mobile solutions. As we realized the need for a great, digital-first experience in IR, we added webcasting and teleconferencing to form a complete line of communications solutions. In 2015, we expanded into stock surveillance and capital markets intelligence. Today, we provide a full suite of IR products, many of which are integrated into Q4 Desktop.
We are unique in that we typically adopt new technologies earlier than our competition, are always pushing the boundaries, and are helping to make our customers leaders in IR.
How were you introduced to MongoDB? What problem were you trying to solve at the time?
We were introduced to MongoDB a number of years ago when we were building a small application that integrated streams from multiple social media sources. Our relational database at the time, SQL Server, made it difficult to effectively work with different data formats and data types. Instead we turned to MongoDB, which didn’t force us to define schemas upfront.
At around the same time, we were rapidly scaling the company and needed to onboard many new developers. By using MongoDB in our technology stack, and taking advantage of its ease of use and the quality of the online documentation, we were able to significantly decrease the amount of time it took to ramp new hires and make them productive. This was another main driver behind our adoption of the technology.
Today with MongoDB, I can ramp up a new developer in less than a week. I can’t say that about any other database. This is important for the business because it’s our developers that drive our products. Every day is important and we save a significant amount of money by using our time more effectively.
What applications within your product portfolio use MongoDB?
Today, three out of our four main products are supported by MongoDB in some way. Those products are: Websites, which includes corporate websites, investor websites, online reports, and newsrooms; Intelligence, which helps our clients convert capital markets information into actionable intelligence; and Q4 Desktop, our integrated IR CRM and workflow platform.
MongoDB is used for a wide variety of datasets, including news, press releases, user data, stock and market data, and social media. We run an equally wide range of queries against our data, everything from simple find operations to complex aggregations.
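The "complex aggregations" mentioned above can be sketched as a MongoDB aggregation pipeline. The collection name (`press_releases`) and field names are hypothetical, chosen only to illustrate the kind of grouping query an IR platform might run over its news data.

```python
from datetime import datetime

# Illustrative pipeline: count press releases per client per month,
# most active combinations first. All names are hypothetical.
pipeline = [
    # Restrict to releases published since the start of 2018.
    {"$match": {"published": {"$gte": datetime(2018, 1, 1)}}},
    # Group by client and by calendar month of publication.
    {"$group": {
        "_id": {
            "client": "$client_id",
            "month": {"$dateToString": {"format": "%Y-%m",
                                        "date": "$published"}},
        },
        "count": {"$sum": 1},
    }},
    # Sort the grouped results by volume, descending.
    {"$sort": {"count": -1}},
]

# With pymongo this would be executed as:
#   results = db.press_releases.aggregate(pipeline)
```

A pipeline like this replaces what would otherwise be a multi-clause SQL `GROUP BY` query, which is in line with the debugging-transparency point made later in the interview.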
What other databases do you use? How do you decide when to use MongoDB versus another technology?
MongoDB is one of a few databases we use in our company. For relational data, we use either SQL Server or PostgreSQL. We also use DynamoDB for a very specific set of use cases. DynamoDB is a good service but as a database, it’s not nearly as powerful as MongoDB. There are no aggregations, the query language isn’t as elegant, and we don’t use it to store anything complex.
The majority of our products are composed of specialized microservices and for us, MongoDB is a great fit for working within this paradigm. In general, MongoDB is our go-to database for data that doesn’t map neatly into rows and columns, anytime we can benefit from not having a predefined schema, or when we need to combine multiple data structures. We’ve also found that MongoDB queries are typically more transparent than the long, complex SQL queries most of us have grown accustomed to. This can save us up to an hour a day during debugging.
How do you currently run MongoDB?
When it comes to running MongoDB, we rely on the fully managed database service, MongoDB Atlas. In our experience, it is the best automated service for running MongoDB in the cloud. Atlas provides more functionality, more customization options, and better tools than the third party MongoDB as a service providers we’ve used in the past.
What alternatives did you evaluate?
When we were first looking at MongoDB services, MongoDB Atlas had not yet been released. We started our development on Heroku where some of the third party MongoDB service providers are available as Heroku Elements (add-ons). While Heroku was great at keeping our overhead low at the start, costs grew significantly when our products began taking off.
The pricing model of the first third-party MongoDB service provider we tried (Compose) quickly became untenable. We also found that the latest versions of the database were not supported.
We migrated to another MongoDB service (mLab) but again, we encountered issues with the pricing model. Their pricing was strictly tier-based with little flexibility to tweak our deployment configuration.
Lastly, we deployed a few MongoDB clusters using a service associated with Rackspace Cloud (ObjectRocket). Once again, we found the service to be behind in delivering the latest database features and updates.
Most recently, we migrated to MongoDB Atlas because of the cost savings, and because managing a growing microservices architecture leveraging multiple MongoDB service providers with Heroku was becoming increasingly difficult. By moving to Atlas, we’re able to save money and consolidate the management of all of our MongoDB clusters.
Tell us about your migration to MongoDB Atlas.
During the migration, the Atlas team helped us ensure that everything went seamlessly. Anytime you migrate data from one place to another, it’s a big risk to the business. However, we found the Atlas live migration service to be amazing. We originally tested it with pre-production and staging environments. When all went well, we completed a rapid production migration process and didn’t even need a maintenance window. I was pleasantly surprised with how smooth our move to MongoDB Atlas was.
Which public cloud do you use?
We’re about 70-80% on AWS and the rest is on Rackspace. We don’t just use AWS as a hosting provider, but also for Alexa, Lambda, and their streaming offering.
For example, we were able to quickly use MongoDB Stitch, MongoDB’s backend as a service, to integrate the Alexa Voice Service, AWS Lambda, and our data in MongoDB Atlas to deliver a voice-driven demo at one of the largest investor relations conferences.
How has your experience been with MongoDB Atlas?
We have over a dozen MongoDB clusters currently running in MongoDB Atlas across different projects, all on AWS. We are planning to migrate even more over in the next couple of months.
I like the fully managed backup service that Atlas provides as it has more functionality than anything I’ve used with other providers. The ability to restore to any point in time, restore to different projects in MongoDB Atlas, and query snapshots in place, allows us to meet our disaster recovery objectives and easily spin up new environments for testing.
Additionally, the ongoing support from the MongoDB Atlas team has been very helpful. Even the simple chat in the application UI has been very responsive. Having quick and easy access to an expert support line is like having a life preserver. We don’t want to use it much, but when it’s needed, we need to know that it will work.
Alex – I’d like to thank you for taking the time to share your insights with the MongoDB community.
Available on AWS, Azure, and GCP, MongoDB Atlas is the best way to deploy, operate, and scale MongoDB in the cloud. Get started in minutes with a 512MB database for free.
How MongoDB Atlas Helps CaliberMind Bring Customer Data to Life
As CTO, I spend most of my time behind the scenes working with our engineers making sure CaliberMind users have real-time access to massive volumes of sales and marketing data. Beyond that, in a crowded SaaS market, my co-founder and I have to stay one step ahead of the competition while keeping our cost of goods, metrics, and operations scalable.
Planning our roadmap is never easy – especially as a startup. With every decision we make, we try to limit our burn rate because of how critical every dollar is at this stage. This means we hire based on what roles will have the most impact, and it also means any vendor we work with has to be a good culture and tech fit, which is one of the reasons we went with MongoDB Atlas, MongoDB’s Database as a Service.
It's been imperative that our technology choices stand the test of time and remain flexible enough to pivot with us as needed. CaliberMind initially started off as a company doing deep B2B customer profiling (aka "buyer persona modeling"). We knew that we'd need to integrate with hundreds of different APIs to collect all this data, plus deal with semi-structured data from web scrapes, email correspondence, chat logs, support tickets, etc. Additionally, every customer consumes different content (which we can use NLP to understand), and each first-party system has customized data schemas that we must join and normalize.

We soon realized that dealing with all this data was an even bigger problem for our customers than the persona modeling. This insight spurred a pivot to become a Customer Data Platform (CDP) designed for marketing and sales users. Our customers sometimes have 20+ systems that contain key data points about their customers and prospects. We needed to extract data from all these platforms, like Salesforce, Marketo, Hubspot, Tableau, and other silos, and blend it with third-party vendor data to enrich these records and fill in missing data points. Building this clean, unified, and open data set is what our customers wanted to buy, and it enables AI and automation processes like lead scoring and dynamic personalization to work more effectively. Ultimately, data without context is meaningless.
To the Cloud We Go
When I thought about how we’d manage our database, it was obvious we needed a hosted environment. First, because I wanted the comfort and security of managed backups. And second, so I could abstract the administration regardless of our cluster size. With a startup, time, like money, is limited. Because of the time crunch, it’s important to find the balance between administering an application myself and paying someone else to do it. MongoDB’s database as a service platform, MongoDB Atlas, has been the perfect balance for us.
Before choosing MongoDB, I had a lot of experience with SQL using MySQL and Postgres. When using those systems, once you create the schemas and a relationship between different tables, it's very fast and easy to understand. But it’s not agile. And as a startup, we need to be flexible and understand how we work with the data, since the data structure changes fast and often.
But with MongoDB, data is stored as flexible, JSON-like documents, which allows us to add and remove properties and easily manipulate the data model to make a lot of progress quickly. When you’re a startup and your product iterates over time, you need to be able to make quick adjustments. Spinning up a MongoDB cluster with Atlas helps us effortlessly update data structures and makes data very accessible for our developers.
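The schema flexibility described above can be shown with two documents destined for the same collection. This is a minimal sketch; the `companies` collection, field names, and values are hypothetical, not CaliberMind's actual data model.

```python
# Two documents in the same (hypothetical) "companies" collection need
# not share a schema -- each can carry only the fields it actually has:
acme = {"name": "Acme Corp", "domain": "acme.example", "employees": 250}
globex = {"name": "Globex", "twitter": "@globex"}  # no employee count yet

# Adding a new property later is a simple update, not a table migration:
add_score = {"$set": {"fit_score": 0.87}}

# With pymongo, these would be applied as:
#   db.companies.insert_many([acme, globex])
#   db.companies.update_one({"name": "Globex"}, add_score)
```

Because nothing forces the two documents to agree on fields, a product iteration that introduces `fit_score` touches only the application code, not the database schema.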
On top of the backup and ease of administration criteria, another big issue for us is compliance. Our customers trust us with their data, and we take that responsibility seriously. It’s important for us to work with a product like MongoDB Atlas that helps us handle compliance and do encryption at rest – otherwise, that would all fall on my shoulders.
MongoDB Atlas also helps with its seamless integration with AWS. Currently, we’re using Amazon Redshift as a primary pipeline from outside third-party data sources that use SQL to extract details of our data. We can process this data on our servers and then store it in MongoDB to be referenced for future use. Whether we're updating CRM records on the fly or visualizing data in our product dashboards, our users can be assured that they have immediate and performant access to their data.
Atlas to the Rescue
As part of MongoDB’s Startup Accelerator, we’ve been granted Atlas credits to help us ramp up. We chose to use Atlas over mLab for the long term because the pricing is very straightforward. Clear pricing means we can more effectively forecast how much we’ll spend in the future. The credits and pricing clarity are critical for us because burn rate is our second highest priority, after making sure our customers are happy.
As we’ve grown with Atlas, I’ve realized that architecturally, it is great for our team, giving us access to reporting and insights to make the right decisions and iterate quickly. I’m now very comfortable working with the managed database, because I get a lot of real-time activity statistics about our database’s performance. I might not use it day to day, but periodically I will quickly check and see why some queries are holding us up. We had one major update where we had to separate our caching collection and we could easily see the bottleneck through the charts provided by Atlas.
These insights helped us rapidly find a solution. Since then, everything has been stable. These insights are crucial because when I see a problem, I need to be able to quickly right the ship and Atlas is a crucial part in helping me do that.
Never On My Own
Atlas saves me a lot of headaches. As the CTO of a seed-stage company, I have to wear a lot of hats. I do CI/CD, development and spend a lot of time on growing my team professionally. When tasks get taken off my plate, that makes my life a lot easier.
I know I don't need to worry about database administration. I don't need to worry about the data itself, because there's always a backup and always someone keeping an eye on it. And if there's a question, I can reach out to real MongoDB engineers for technical support.
I also can rest easy with the knowledge that as we scale, we’ll have no issues. Scaling with MongoDB Atlas involves clicking a few buttons to get the resources we need to accommodate our increasing amount of data flow. The MongoDB Atlas dashboard provides us insight into latency and spikes that visually represent the current state of our clusters. If we see that we’re reaching a threshold, then we simply go up a tier by making a few clicks in the UI and waiting for the migration to complete.
When you’re making the decision to take something off your plate, you often take a step back and think, “I’m better off doing this on my own.” With MongoDB, I’m not. We’ve found the solution that has a big impact on our business while giving us the financial predictability we need. I’m glad I’m not doing it on my own.
Building a Secure Stock-Trading App with MongoDB Atlas
Commandiv is a stock-trading platform that uses MongoDB to run complex analytics on stock performance and to backtest their trading algorithms.