Leaf in the Wild: comparethemarket.com Migrates to MongoDB, Bringing Apps to Market 2x Faster than Relational Databases
Out-Innovating Competitors with MongoDB, Microservices, Docker, Hadoop, and the Cloud
Internet and mobile technologies put power firmly in the hands of the customer. Nowhere is that more true than in price comparison services, which let users find the best deals with just a few clicks or swipes. comparethemarket.com has grown to become one of the UK’s leading providers, and one of the country’s best-known household brands. But this growth didn’t come from low prices or clever marketing alone. In the words of Matthew Collinge, associate director of technology and architecture at comparethemarket.com:
We view technology as a competitive advantage. When you are operating in an industry as competitive as price comparison services, you need to outpace others by delivering new applications and features faster, at higher quality and lower cost. For us, the key to this is embracing agile development methodologies, continuous delivery, open source technology and cloud computing.
I sat down with Matthew in his uber-cool Shoreditch offices in London to learn more.
Please start by telling us a little bit about your company.
comparethemarket.com was launched in 2006 and has grown rapidly over the past decade to become one of the UK’s leading price comparison websites. We provide customers with an easy way to make the right choice for them on a wide range of products including motor, home, life, travel, and pet insurance as well as utility providers and financial products such as credit cards and loans.
Please describe your application using MongoDB.
We are using MongoDB as our operational database across the increasing number of microservices that run comparethemarket.com. MongoDB has become the default database choice for building new services due to its low friction for development and performance characteristics.
Our comparison systems need to collect customer details efficiently and then securely send them to a number of different providers. Once the insurers' systems reply, comparethemarket.com can aggregate and display prices for consumers. At the same time, we need the database to run real-time analytics to personalize the customer experience across our web and mobile properties.
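As a concrete illustration of that aggregation step, here is a minimal sketch using the MongoDB Node.js driver; the database, collection, and field names are hypothetical rather than comparethemarket.com’s actual schema.

```javascript
// Hypothetical sketch: fetch the quotes providers have returned for one
// customer enquiry and sort them for display, cheapest first.
const { MongoClient } = require('mongodb');

async function quotesForEnquiry(uri, enquiryId) {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    return await client.db('comparison').collection('quotes')
      .aggregate([
        { $match: { enquiryId } },                                  // quotes for this enquiry only
        { $sort: { annualPremium: 1 } },                            // cheapest first
        { $project: { _id: 0, provider: 1, product: 1, annualPremium: 1 } }
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```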
What were you using before MongoDB?
MongoDB was introduced in 2012 to power our home insurance comparison service. Our existing services were built on Microsoft SQL Server, but that was proving difficult to scale as our services became increasingly popular, and we were dealing with larger and larger traffic volumes. Relational databases are packed with loads of features, but we were paying for them in terms of performance overhead and reduced development agility.
Why did you select MongoDB?
We wanted to be able to configure consistency per application and per operation. Sometimes it is essential for us to be able to instantly read our own writes, so strong consistency is needed. In other scenarios, eventual consistency is fine. MongoDB gave us this tuneable consistency with performance characteristics that met our needs in ways unmatched by other databases.
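A minimal sketch of what that tunable consistency looks like with the MongoDB Node.js driver, assuming hypothetical database and collection names (majority read concern requires MongoDB 3.2 or later):

```javascript
const { MongoClient } = require('mongodb');

async function consistencyDemo(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const db = client.db('comparison');

  // Strong consistency: wait for a majority of the replica set to
  // acknowledge the write, then read our own write from the primary.
  await db.collection('policies').insertOne(
    { customerId: 42, product: 'motor' },
    { writeConcern: { w: 'majority' } }
  );
  const policy = await db.collection('policies').findOne(
    { customerId: 42 },
    { readConcern: { level: 'majority' }, readPreference: 'primary' }
  );

  // Eventual consistency is fine here: stale reads from a secondary
  // are acceptable for historical data.
  const history = await db.collection('quoteHistory')
    .find({ customerId: 42 }, { readPreference: 'secondaryPreferred' })
    .toArray();

  await client.close();
  return { policy, history };
}
```

Passing the concern per operation, rather than globally, is what lets a single service mix strongly consistent and eventually consistent access paths.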
Back in 2012, comparethemarket.com was purely a .NET shop running on a Windows-only infrastructure. MongoDB was one of the few non-relational databases that supported both Linux and Windows, so it was easy for developers to pick up, and for our ops teams to deploy and run.
Please describe your MongoDB deployment.
We are currently running a hybrid environment, with some of the older services running in our on-premises data centers, while newer microservices are deployed to Amazon Web Services. Our goal is to move everything to the cloud.
Our on-premises architecture comprises a five-node MongoDB replica set distributed across three data centers for resilience and disaster recovery: two full-spec nodes in each of the two primary data centers, plus an arbiter in a third location. We use dedicated server blades configured with local SSD storage and Ubuntu.
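For readers unfamiliar with this topology, a hypothetical mongo shell configuration along these lines might look like the following; the replica set name and hostnames are illustrative:

```javascript
// Five-member replica set matching the topology described above: two
// data-bearing nodes in each of two primary data centers, plus an
// arbiter in a third site to break ties during elections.
rs.initiate({
  _id: "ctmReplSet",
  members: [
    { _id: 0, host: "dc1-mongo1.example.internal:27017" },
    { _id: 1, host: "dc1-mongo2.example.internal:27017" },
    { _id: 2, host: "dc2-mongo1.example.internal:27017" },
    { _id: 3, host: "dc2-mongo2.example.internal:27017" },
    { _id: 4, host: "dc3-arbiter.example.internal:27017", arbiterOnly: true }
  ]
});
```

The arbiter votes in elections but holds no data, which keeps the third site cheap while still providing a tie-breaker if a primary data center is lost.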
In the cloud, each microservice, or logical grouping of related microservices, is provisioned with its own MongoDB replica set running in <a href="https://hub.docker.com/_/mongo/">Docker containers</a>, and deployed across multiple AWS Availability Zones. These are typically AWS m4.medium instances configured with encrypted EBS storage. We are looking at using MongoDB’s encrypted storage engine to further reduce our security-related surface area. Our MongoDB cloud instances are ahead of our on-premises cluster, with some running the latest 3.2 release. Each instance is provisioned with an Ops Manager agent using an “Infrastructure-as-Code” design pattern, with full test suites and Chef recipes from a curated base AMI.
In terms of application development, we use JavaScript on Node.js as well as .NET running on CoreCLR, with Docker to make orchestration easier.
What tools do you use to manage your MongoDB deployment?
Operational automation is essential in enabling us to launch new features quickly and run them reliably at scale. We use Ops Manager to deploy replica sets, perform zero-downtime upgrades, and manage backups. Unlike SQL Server, which took backups once every 24 hours and brought applications to their knees while it did so, MongoDB backups are performed continuously. As a result, they run just a few seconds behind the live databases and impose almost zero performance overhead. We can recover to any point in time, so we’re able to provide the business with much higher data guarantees and better recovery point and recovery time objectives.
Can you share best practices you have observed in scaling your MongoDB infrastructure?
There are several:
- Take advantage of bulk write methods to insert or modify multiple documents with a single database call. This makes it simpler and faster to load large batches of data into MongoDB (see the sketch after this list).
- For applications that can tolerate eventual consistency, configure secondary read preferences to distribute queries across all members of the replica set.
- Make judicious use of indexes. MongoDB’s secondary indexes make it very fast to run expressive queries against rich data structures, but like any database, they don’t come for free. They add to your working set size and have to be updated when you write to the database.
- Pay attention to schema design. Make sure you model your data around the application’s query patterns. If you are using the MMAPv1 storage engine, evaluate the performance impact of inserting new documents versus appending data to existing documents.
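Here is a brief sketch of the first three practices with the MongoDB Node.js driver; the collection and field names are hypothetical:

```javascript
const { MongoClient } = require('mongodb');

async function scalingPatterns(uri) {
  const client = new MongoClient(uri);
  await client.connect();
  const quotes = client.db('comparison').collection('quotes');

  // Bulk writes: batch several inserts and updates into one database call.
  await quotes.bulkWrite([
    { insertOne: { document: { provider: 'acme', annualPremium: 312, createdAt: new Date() } } },
    { insertOne: { document: { provider: 'apex', annualPremium: 289, createdAt: new Date() } } },
    { updateOne: {
        filter: { provider: 'zenith' },
        update: { $set: { annualPremium: 305 } },
        upsert: true
    } }
  ]);

  // Secondary reads: this query tolerates eventual consistency, so let it
  // run on a secondary and spread load across the replica set.
  const yesterday = new Date(Date.now() - 24 * 60 * 60 * 1000);
  const recent = await quotes
    .find({ createdAt: { $gte: yesterday } }, { readPreference: 'secondaryPreferred' })
    .toArray();

  // Indexes: support the query above, but remember every index grows the
  // working set and must be maintained on each write.
  await quotes.createIndex({ createdAt: 1 });

  await client.close();
  return recent;
}
```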
In addition to MongoDB, I understand you are using other new-generation data management infrastructure?
We are. In our previous generation of systems, all application state was stored in the database, and then imported every 24 hours from backups into our data warehouse. But this approach presented several issues:
- No real-time insight: our analytics processes were working against aged data.
- Any application changes broke the ETL pipeline.
- The management overhead increased as we added more applications and data volumes grew.
As we’ve moved to microservices, we’ve modernized our data warehousing and analytics stack. While each microservice uses its own MongoDB database, it is important that we keep services synchronized, so every application event is written to a Kafka queue. Event processing runs against the queue to identify relevant events that can then trigger specific actions – for example, customizing customer questions or firing off emails. Interesting events are written to MongoDB so we can personalize the user experience in real time as customers interact with our service. Event processing is currently written in Node.js, but we are also evaluating Apache Spark and Storm.
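A minimal sketch of this event-processing pattern in Node.js, assuming the kafkajs package as the Kafka client (the article does not name the library comparethemarket.com uses) and hypothetical topic, event, and collection names:

```javascript
// Consume application events from Kafka, keep only the interesting ones,
// and write them to MongoDB for real-time personalization.
const { Kafka } = require('kafkajs');
const { MongoClient } = require('mongodb');

const INTERESTING = new Set(['quote.requested', 'policy.purchased']);

async function processEvents(brokers, mongoUri) {
  const mongo = new MongoClient(mongoUri);
  await mongo.connect();
  const events = mongo.db('personalization').collection('events');

  const kafka = new Kafka({ clientId: 'event-processor', brokers });
  const consumer = kafka.consumer({ groupId: 'personalization' });
  await consumer.connect();
  await consumer.subscribe({ topic: 'application-events' });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value.toString());
      if (!INTERESTING.has(event.type)) return; // drop uninteresting events
      await events.insertOne(event);            // now queryable in real time
    }
  });
}
```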
We also write these events into Hadoop where they can be aggregated and processed with historical data, in conjunction with customer data from the insurance providers. This enables us to build enriched “data products”, for example user profiles or policy offers. That data, which is output as BSON, is then imported into our operational MongoDB databases. We are investigating using AWS Lambda functions to further automate the initiation of this process.
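mongorestore is the usual tool for loading BSON dumps into MongoDB; below is a minimal Node.js sketch of the same import step, assuming a single .bson file produced by the Hadoop pipeline. The file path, database, and collection names are hypothetical.

```javascript
const fs = require('fs');
const { deserialize } = require('bson');
const { MongoClient } = require('mongodb');

async function importBsonDump(filePath, uri) {
  const buf = fs.readFileSync(filePath);
  const docs = [];
  // A .bson dump is a sequence of documents, each prefixed with its own
  // little-endian int32 length.
  for (let offset = 0; offset < buf.length; ) {
    const size = buf.readInt32LE(offset);
    docs.push(deserialize(buf.subarray(offset, offset + size)));
    offset += size;
  }
  const client = new MongoClient(uri);
  await client.connect();
  await client.db('comparison').collection('dataProducts').insertMany(docs);
  await client.close();
}
```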
How are you measuring the impact of MongoDB on your business?
How quickly we can bring new products to market, and the quality of service we deliver to our customers, are what matter most. The technology we use is a key enabler in achieving our business goals and winning share from larger competitors.
A great example is the Meerkat Movies cinema campaign. We had just one month to build a prototype, and then another two months to iterate towards a minimum viable product (MVP) before launch. To make it even more interesting, it was our first major project using Node.js and AWS, rather than .NET and our on-premises facilities. The project would have taken at least six months to deliver with a traditional relational database. It took just three months with MongoDB.
Now we are pushing new features live at least twice a week, and up to eight times a week for some projects. We are working towards continuously deploying every commit the development team makes. Docker containers, microservices, and MongoDB with its dynamic schema are at the core of our continuous delivery pipeline.
But it’s about more than speed alone. Service uptime is critical. MongoDB’s distributed design provides a major advantage over SQL Server. We can use rolling restarts to implement zero-downtime upgrades, a process that is now fully automated by Ops Manager. On our previous database, we had to take up to 60 minutes of downtime – and much longer if we ever needed to roll back. We can distribute MongoDB across data centers and, in AWS, across Availability Zones, with self-healing recovery to provide continuous availability in the event of system outages. Failure recovery would take 15-30 minutes with SQL Server, during which time our services were down. All of that is now firmly in the past!
Do you use any commercial services to support your MongoDB deployment?
We use MongoDB Enterprise Advanced. Ops Manager provides operational automation with fine grained monitoring, and on-demand training means we can keep our teams up to speed with the latest MongoDB skills and quickly onboard new staff. We also get direct access to MongoDB’s engineering team to quickly escalate and resolve any issues.
We have invested in running several training sessions and were in fact the first business in the UK to run a ‘War Gaming’ session. MongoDB consultants attempted to break a replica set in new and interesting ways, and our guys attempted to diagnose and remediate the issues. This exercise enabled us to harden our deployment and operational processes.
What benefits are you getting from MongoDB 3.2?
We are using 3.2 with some of our new services; we wanted to adopt the latest release to put it through its paces. One big advantage for our .NET apps is access to the new C# and .NET driver with its support for the async/await programming model.
What advice would you give someone who is considering using MongoDB for their next project?
The field of distributed systems requires a shift in mindset from the scale-up systems of the past. Development is simpler, but operations can be more complex, though tools like Ops Manager make running these systems much easier. Nonetheless, make sure you understand distributed systems concepts so you can engineer your applications and processes to take full advantage of MongoDB’s distributed design.
In conclusion, anything you’d like to add?
comparethemarket.com is hiring! We are always on the lookout for talented engineers who want to deliver business value through the application of technology. All of our latest tech openings are published to our careers page.
Matthew, thank you for taking the time to share your experiences with the community.
To learn more about building microservices architectures with Docker and MongoDB, download our guide: Containers and Orchestration Explained.