GIANT Stories at MongoDB

Healthcare Compliance Platform Enables Hospitals to Achieve Significant Savings in Drug Discounts

MongoDB Delivers Performance, Scale and Functionality to Help Verity Solutions Drive Competitive Advantage with Data-intensive Application

Verity Solutions is driven to help healthcare organizations save money so they can extend care to those who need it most. Their technology enables hospitals to maintain compliance and maximize cost savings related to the healthcare industry law known as 340B. Enacted in 1992, 340B provides hospitals with economic relief from the financial burden of servicing indigent, under-insured, or uninsured patients.

The Verity 340B™ platform, built on MongoDB, ingests data from a variety of hospital systems, including prescription, patient health, admission, discharge and drug provisioning data. It then matches that data against internal hospital dispensaries and third-party pharmacies to determine which prescriptions are eligible for discount. In turn, qualifying hospitals can take advantage of significant discounts off average wholesale drug prices. One hospital is able to provide millions of dollars in indigent care thanks to the savings achieved through Verity’s product.

When a pharmacy needs to replenish inventory, getting medications on the shelf is of utmost importance. The MongoDB-powered Verity platform delivers the operational reliability and availability to ensure hospitals can quickly restock prescriptions, all while avoiding non-compliance penalties and optimizing savings on qualified transactions.

“MongoDB gives us a competitive edge for our data-intensive application,” said Mark Cassidy, CTO, Verity Solutions. “We consistently hear from hospitals that tasks that take up to ten minutes on competing solutions are performed in mere seconds on our MongoDB-powered platform.”

MongoDB fit Verity’s requirements for scale, performance and operational efficacy. Developers are able to be more productive throughout the lifecycle with MongoDB’s document model that makes schema changes effortless, while ongoing administration, management and scaling operations have proven to be easy and efficient. MongoDB Cloud Manager, which provides comprehensive performance visibility and monitoring, also simplifies how Verity performs snapshot backups.

MongoDB will also enable Verity to store all customer data in a single database in order to create benchmarks across hundreds of hospitals. With no standard reference currently available, this will be one of the first times hospitals have insight into how they compare with the rest of the industry on drug spend. If a hospital’s non-discounted drug spend is higher than others in a similar cohort, for example, the new insight provided by Verity will help the hospital identify new opportunities for cost savings. At the same time, if a hospital is doing well with its savings rate, Directors of Pharmacy gain concrete ROI data to share with the CFO. Leveraging this data across their entire customer base translates into competitive advantage for Verity.

“The more we can leverage the data assets we have, the more types of data we can ingest and the more analytics we can offer back to our customers,” said Cassidy. “With MongoDB, we have access to a wider scope of data that enables us to build new applications we may not have thought about in the past.”

Try MongoDB Enterprise Advanced


Leaf in the Wild: comparethemarket.com Migrates to MongoDB, Bringing Apps to Market 2x Faster than Relational Databases

Out-Innovating Competitors with MongoDB, Microservices, Docker, Hadoop, and the Cloud

Internet and mobile technologies put power firmly in the hands of the customer. Nowhere is this more true than in the world of price comparison services, which allow users to find the best deals for them with just a few clicks or swipes. comparethemarket.com has grown to become one of the UK’s leading providers, and one of the country’s best known household brands. But this growth didn’t come from low prices or clever marketing alone. In the words of Matthew Collinge, associate director of technology and architecture at comparethemarket.com:

We view technology as a competitive advantage. When you are operating in an industry as competitive as price comparison services, you need to outpace others by delivering new applications and features faster, at higher quality and lower cost. For us, the key to this is embracing agile development methodologies, continuous delivery, open source technology and cloud computing.

I sat down with Matthew in his uber-cool Shoreditch offices in London to learn more.

Please start by telling us a little bit about your company.
comparethemarket.com was launched in 2006 and has grown rapidly over the past decade to become one of the UK’s leading price comparison websites. We provide customers with an easy way to make the right choice for them on a wide range of products including motor, home, life, travel, and pet insurance as well as utility providers and financial products such as credit cards and loans.

Please describe your application using MongoDB.
We are using MongoDB as our operational database across the increasing number of microservices that run comparethemarket.com. MongoDB has become the default database choice for building new services due to its low development friction and strong performance characteristics.

Our comparison systems need to collect customer details efficiently and then securely send them to a number of different providers. Once the insurers' systems reply, comparethemarket.com can aggregate and display prices for consumers. At the same time, we need the database to run real-time analytics to personalize the customer experience across our web and mobile properties.

What were you using before MongoDB?
MongoDB was introduced in 2012 to power our home insurance comparison service. Our existing services were built on Microsoft SQL Server, but that was proving difficult to scale as our services became increasingly popular, and we were dealing with larger and larger traffic volumes. Relational databases are packed with loads of features, but we were paying for them in terms of performance overhead and reduced development agility.

Why did you select MongoDB?
We wanted to be able to configure consistency per application and per operation. Sometimes it is essential for us to be able to instantly read our own writes, so strong consistency is needed. In other scenarios, eventual consistency is fine. MongoDB gave us this tuneable consistency with performance characteristics that met our needs in ways unmatched by other databases.
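To make this concrete, here is a minimal PyMongo sketch of tuning consistency per collection and per operation. The connection string, database, and collection names are hypothetical; this illustrates the capability rather than comparethemarket.com’s actual code.

```python
# A minimal sketch of per-operation consistency tuning with PyMongo.
# Hosts, database, and collection names are hypothetical.
from pymongo import MongoClient, ReadPreference
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://db1,db2,db3/?replicaSet=rs0")

# Strong consistency: writes acknowledged by a majority of nodes,
# reads served by the primary, so we can instantly read our own writes.
quotes = client.comparison.get_collection(
    "quotes",
    write_concern=WriteConcern(w="majority"),
    read_preference=ReadPreference.PRIMARY,
)
quotes.insert_one({"customer_id": 42, "product": "motor", "premium": 320.5})
latest = quotes.find_one({"customer_id": 42})  # sees the write above

# Eventual consistency is fine here: prefer secondaries to spread load.
quotes_relaxed = client.comparison.get_collection(
    "quotes", read_preference=ReadPreference.SECONDARY_PREFERRED
)
recent = list(quotes_relaxed.find({"product": "motor"}).limit(10))
```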

Back in 2012, comparethemarket.com was purely a .NET shop running on a Windows-only infrastructure. MongoDB was one of the few non-relational databases that supported both Linux and Windows, so it was easy for developers to pick up, and for our ops teams to deploy and run.

Please describe your MongoDB deployment.
We are currently running a hybrid environment, with some of the older services running in our on-premises data centers, while newer microservices are deployed to Amazon Web Services. Our goal is to move everything to the cloud.

Our on-premises architecture comprises a five-node MongoDB replica set distributed across three data centers to provide resilience and disaster recovery. Two full-spec nodes are deployed in each of the two primary data centers, with an arbiter in a secondary location. We use dedicated server blades configured with local SSD storage and Ubuntu.
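For readers unfamiliar with the topology, a replica set like this is defined with a single initiation command. Below is a minimal PyMongo sketch under assumed host names (and assuming a recent driver that supports the directConnection option); it illustrates the 2 + 2 + arbiter layout rather than comparethemarket.com’s actual configuration.

```python
# Hypothetical initiation of the 2 + 2 + arbiter topology described above.
from pymongo import MongoClient

# Connect directly to one not-yet-initiated member (recent PyMongo).
client = MongoClient("dc1-node1:27017", directConnection=True)
client.admin.command("replSetInitiate", {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "dc1-node1:27017"},   # primary data center 1
        {"_id": 1, "host": "dc1-node2:27017"},
        {"_id": 2, "host": "dc2-node1:27017"},   # primary data center 2
        {"_id": 3, "host": "dc2-node2:27017"},
        {"_id": 4, "host": "dc3-arbiter:27017", "arbiterOnly": True},  # tie-breaker site
    ],
})
```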

In the cloud, each microservice, or logical grouping of related microservices, is provisioned with its own MongoDB replica set running in Docker containers (https://hub.docker.com/_/mongo/), and deployed across multiple AWS Availability Zones. These are typically AWS m4.medium instances configured with encrypted EBS storage. We are looking at using MongoDB’s Encrypted storage engine to further reduce our security-related surface area. Our MongoDB cloud instances are ahead of our on-premises cluster, with some running the latest 3.2 release. Each instance is provisioned with an Ops Manager agent using an “Infrastructure-as-Code” design pattern, with full test suites and Chef recipes from a curated base AMI.

In terms of application development, we use JavaScript on Node.js, as well as .NET running on CoreCLR, with Docker to make orchestration easier.

What tools do you use to manage your MongoDB deployment?
Operational automation is essential in enabling us to launch new features quickly, and run them reliably at scale. We use Ops Manager to deploy replica sets, perform zero-downtime upgrades, and take backups. Unlike SQL Server, which took backups once every 24 hours, during which time applications were brought to their knees, MongoDB backups are performed continuously. As a result, they run just a few seconds behind the live databases, and impose almost zero performance overhead. We can recover to any point in time, so we’re able to provide the business with much higher data guarantees and better recovery point and recovery time objectives.

Can you share best practices you have observed in scaling your MongoDB infrastructure?
There are several; a short PyMongo sketch illustrating the first two follows the list:

  1. Take advantage of bulk write methods to insert or modify multiple documents with a single database call. This makes it simpler and faster to load large batches of data to MongoDB.
  2. For applications that can tolerate eventual consistency, configure secondary read preferences to distribute queries across all members of the replica set.
  3. Make judicious use of indexes. MongoDB’s secondary indexes make it very fast to run expressive queries against rich data structures, but like any database, they don’t come for free. They add to your working set size and have to be updated when you write to the database.
  4. Pay attention to schema design. Make sure you model your data around the application’s query patterns. If you are using the MMAP storage engine, evaluate the performance impacts of inserting new documents, versus appending data to existing documents.
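Here is that sketch of the first two practices, with illustrative connection and collection names (not comparethemarket.com’s actual schema):

```python
# Illustrative PyMongo sketch of bulk writes (1) and secondary reads (2).
from pymongo import MongoClient, InsertOne, UpdateOne, ReadPreference

client = MongoClient("mongodb://db1,db2,db3/?replicaSet=rs0")
events = client.ctm.events  # hypothetical collection

# 1. Batch inserts and updates into a single database call.
ops = [InsertOne({"user": i, "action": "quote_viewed"}) for i in range(1000)]
ops.append(UpdateOne({"user": 1}, {"$inc": {"visits": 1}}, upsert=True))
result = events.bulk_write(ops, ordered=False)
print(result.inserted_count, "inserted,", result.modified_count, "modified")

# 2. For eventually-consistent queries, distribute reads across the
# whole replica set instead of funnelling everything to the primary.
events_ro = client.ctm.get_collection(
    "events", read_preference=ReadPreference.SECONDARY_PREFERRED
)
recent = list(events_ro.find({"action": "quote_viewed"}).limit(10))
```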

In addition to MongoDB, I understand you are using other new-generation data management infrastructure?
We are. In our previous generation of systems, all application state was stored in the database, and then imported every 24 hours from backups into our data warehouse. This approach presented several issues:

  • No real-time insight: our analytics processes were working against aged data.
  • Any application change broke the ETL pipeline.
  • The management overhead increased as we added more applications and data volumes grew.

As we moved to microservices, we modernized our data warehousing and analytics stack. While each microservice uses its own MongoDB database, it is important that we can maintain synchronization between services, so every application event is written to a Kafka queue. Event processing runs against the queue to identify relevant events that can then trigger specific actions – for example, customizing customer questions, firing off emails and so on. Interesting events are written to MongoDB so we can personalize the user experience in real time as users interact with our service. Event processing is currently written in Node.js, but we are also evaluating Apache Spark and Storm.
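comparethemarket.com’s event processing is written in Node.js; purely to illustrate the pattern, here is a hedged Python sketch using the kafka-python client, with hypothetical topic, event, and helper names.

```python
# Python sketch of the pattern only; the real pipeline is Node.js.
# Assumes the kafka-python package; topic and field names are invented.
import json
from kafka import KafkaConsumer
from pymongo import MongoClient

events = MongoClient("mongodb://db1/?replicaSet=rs0").personalization.events
consumer = KafkaConsumer("app-events", bootstrap_servers="kafka:9092")

def send_follow_up_email(customer_id):
    # Hypothetical stand-in for the email trigger mentioned above.
    print("queueing follow-up email for", customer_id)

for message in consumer:
    event = json.loads(message.value)
    # Persist "interesting" events so the web tier can personalize
    # the session in real time.
    if event.get("type") in ("quote_started", "policy_viewed"):
        events.insert_one(event)
    elif event.get("type") == "quote_abandoned":
        send_follow_up_email(event["customer_id"])
```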

We also write these events into Hadoop where they can be aggregated and processed with historical data, in conjunction with customer data from the insurance providers. This enables us to build enriched “data products”, for example user profiles or policy offers. That data, which is output as BSON, is then imported into our operational MongoDB databases. We are investigating using AWS Lambda functions to further automate the initiation of this process.

How are you measuring the impact of MongoDB on your business?
What matters is how quickly we can bring new products to market, and the quality of service we deliver to our customers. The technology we use is a key enabler in achieving our business goals and winning share from larger competitors.

A great example is the Meerkat Movies cinema campaign. We had just one month to build a prototype, and then another two months to iterate towards MVP (Minimum Viable Product) before launch. To make it even more interesting, it was our first major project using Node.js and AWS, rather than .NET and on-prem facilities. The project would have taken at least six months to deliver if we had used a traditional relational database. It took just three months with MongoDB.

Now we are pushing new features live at least twice a week, and up to 8 times a week for some projects. We are working towards continuously deploying every commit the development team makes. Docker containers, microservices, and MongoDB with its dynamic schema are at the core of our continuous delivery pipeline.

But it’s more than speed alone. Service uptime is critical. MongoDB’s distributed design provides a major advantage over SQL Server. We can use rolling restarts to implement zero-downtime upgrades, a process that is now fully automated by Ops Manager. On our previous database, we had to take up to 60 minutes of downtime – and much longer if we ever needed to roll back. We can distribute MongoDB across data centers and, in AWS, across availability zones, with self-healing recovery to provide continuous availability in the event of system outages. Failure recovery would take 15-30 minutes with SQL Server, during which time our services were down. All of that is now firmly in the past!

Do you use any commercial services to support your MongoDB deployment?
We use MongoDB Enterprise Advanced. Ops Manager provides operational automation with fine grained monitoring, and on-demand training means we can keep our teams up to speed with the latest MongoDB skills and quickly onboard new staff. We also get direct access to MongoDB’s engineering team to quickly escalate and resolve any issues.

We have invested in running several training sessions and were in fact the first business in the UK to run a ‘War Gaming’ session. MongoDB consultants attempted to break a replica set in new and interesting ways, and our guys attempted to diagnose and remediate the issues. This exercise enabled us to harden our deployment and operational processes.

What benefits are you getting from MongoDB 3.2?
We are using 3.2 with some of our new services. We wanted to adopt the latest release to put it through its paces. One big advantage for our .NET apps is access to the new C# and .NET driver with its support for the Async/Await programming model.

What advice would you give someone who is considering using MongoDB for their next project?
The field of distributed systems requires a mind shift from the scale-up systems of the past. Development is simpler, but operations can be more complex, though tools like Ops Manager make running systems much easier. Nonetheless, make sure you understand distributed system concepts so you can engineer your applications and processes to take full advantage of MongoDB’s distributed systems design.

In conclusion, anything you’d like to add?
comparethemarket.com is hiring! We are always on the lookout for talented engineers who want to deliver business value through the application of technology. All of our latest tech openings are published to our careers page.

Matthew, thank you for taking the time to share your experiences with the community.
To learn more about building microservices architectures with Docker and MongoDB, download our guide: Containers and Orchestration Explained.

Leaf in the Wild: Swisscom Builds its New Application Cloud PaaS for Microservices with Cloud Foundry, Docker, and MongoDB Enterprise Advanced

Leaf in the Wild posts highlight real world MongoDB deployments. Read other stories about how companies are using MongoDB for their mission-critical projects.

Swisscom is leading the transformation from traditional telecommunications company to cloud services provider. Through its new Application Cloud, Swisscom is enabling independent developers through to multinational Swiss-based enterprises to build a new generation of cloud-native microservices on a highly scalable and secure Platform-as-a-Service (PaaS). I met with Marco Hochstrasser, Head of Cloud Platform Development at Swisscom, to learn more.

Can you start by telling us a little bit about your company?

Swisscom is the largest communications provider in Switzerland, delivering voice, mobile, broadband and TV services to 80% of the population. We generate annual revenues of €11bn, and employ over 20,000 people. A growing percentage of the company’s revenue comes from our enterprise business, providing companies in Switzerland with network, IT outsourcing, mobility, digital enterprise solutions and smart working. Cloud services are one of the fastest growing segments of the enterprise business. We provide the full range of infrastructure, software, and Platform-as-a-Service (PaaS) offerings for our enterprise customers in Switzerland.

Can you tell us how you are using MongoDB?

MongoDB is one of the core database offerings available through our PaaS services:

  • The Swisscom Application Cloud is a public PaaS, available to any developer. The platform allows developers to concentrate on coding, leaving management of the underlying operating systems, middleware, and databases to us. The service was launched in October 2015 and already hosts thousands of cloud-native applications in modern container technology.
  • The Swisscom Application Cloud Virtual Private is a newly launched offering that provides a dedicated PaaS for enterprise customers. It is hosted and managed in our Swiss data centers with interconnectivity directly to the customer’s own network and IT infrastructure.

Whether using the public or virtual private Application Cloud, all data is stored in Switzerland on our own network and routed via our local, state-of-the-art data centres. This enables us to guarantee maximum security and smooth operation for local developers and enterprises.

Why did you choose MongoDB as a service in the App Cloud?

We offer traditional relational databases, along with caches, message queues and search engines. We wanted to include a non-relational option, and sought feedback from the market.

MongoDB was the overwhelming choice – compared to Cassandra and Couchbase there is a significantly larger community around the product. There is much higher customer demand for MongoDB, especially from the verticals that constitute the main part of our customer base.

Please describe the technology stack powering your public and private Application Cloud.

We are running an open-standards-based stack across our PaaS offering, which enables our customers to avoid the lock-in inherent in other cloud services. The technology includes:

  • ODM-built, state-of-the-art x86 hardware to scale efficiently
  • Plumgrid software-defined networking to interconnect with Swisscom’s existing networks
  • ScaleIO software-defined storage to bring policy-based provisioning and management to every layer of our technology stack
  • Red Hat OpenStack with KVM virtualization for the IaaS layer
  • Docker for running cloud-native containers (in Cloud Foundry), as well as persistent containers for stateful services
  • Flocker to bring persistency into our service-container framework

Cloud Foundry is our PaaS layer, providing a highly standardized and widely adopted platform. Swisscom is a gold member of the Cloud Foundry Foundation and I am a member of the board.

Cloud Foundry and Docker do not currently provide persistence for stateful services, and so for databases such as MongoDB, we use Flocker from ClusterHQ to mount block storage from our software-defined storage to the container and persist its state. If the container terminates for whatever reason, Flocker transparently remounts the storage volume to the replacement container, with almost no interruption to the service or impact to the user experience. You can learn more about how we use Docker and Flocker to build a stateful database-as-a-service from our recent talk at the Tectonic Summit.

Our technology stack gives us an agile devops environment, with continuous integration and delivery to push new features and upgrades rapidly into production.

App Cloud is powering tens of thousands of microservices, managed by a small group of administrators. The key is that we heavily standardize, automate and monitor everything, and MongoDB integrates perfectly into the environment.

How do you provision and manage MongoDB within the App Cloud?

When a developer creates a service via the App Cloud UI or the API, the request goes to Cloud Foundry which calls its service broker to instantiate the MongoDB Docker container. The MongoDB Ops Manager RESTful API integrates with the Cloud Foundry service broker to provision the image, and Flocker mounts the block storage to the container.

Deploying new MongoDB Services with Cloud Foundry
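To sketch what such an integration might look like, the snippet below shows a hypothetical provisioning step against the automation config resource of the Ops Manager Public API. The group ID, credentials, host names and process fields are all placeholders; a real service broker handles authentication, storage mapping and error handling, and Swisscom’s actual implementation is not shown here.

```python
# Hypothetical broker step: read the automation config, append a mongod
# process, and push it back. Every identifier below is a placeholder.
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://opsmanager.example.net/api/public/v1.0"
GROUP = "<group-id>"
auth = HTTPDigestAuth("api-user", "api-key")  # Ops Manager uses digest auth

config = requests.get(f"{BASE}/groups/{GROUP}/automationConfig", auth=auth).json()
config["processes"].append({
    "name": "tenant-db-0",
    "processType": "mongod",
    "version": "3.2.1",
    "hostname": "container-host-1",   # the Docker host for this tenant
    "args2_6": {"net": {"port": 27017},
                "replication": {"replSetName": "tenant-db"}},
})
requests.put(f"{BASE}/groups/{GROUP}/automationConfig", json=config, auth=auth)
```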

Do you use any support and services for MongoDB?

Yes, we are customers of MongoDB Enterprise Advanced which provides Ops Manager, and 24x7 proactive support direct from the MongoDB technical services and development teams. This enables us to provide a better SLA to our Application Cloud customers. We have also used the Health Check from MongoDB Global Consulting Services, which delivered a detailed readiness assessment of our deployment, with best practices for always-on availability, system configuration and scaling.

How is MongoDB performing in the App Cloud?

We’ve had great traction since the initial launch of the public Application Cloud. We have ramped to over 2,000 database containers in just a few months – more than half of which are running MongoDB.

dorma+kaba Group, one of the world’s leading providers of physical security and access solutions, has developed its new Internet-based services for small and medium enterprises on the Application Cloud with MongoDB, and is able to continuously deliver updates as they release new features.

Application Cloud Virtual Private has just been released, and we already have great feedback from the market, especially as we’re offering the service hosted in Switzerland, in local Swiss data centres. One of the key reasons for the interest we are seeing in MongoDB is that we are currently the only public cloud provider offering MongoDB Enterprise Advanced-as-a-Service in Switzerland, Germany and Austria, through our partnership with MongoDB. Customers get access to the value-added features of MongoDB Enterprise, including advanced security protection with encryption, auditing and centralized authentication; coupled with the fine-grained monitoring and consistent, point-in-time backups available with Ops Manager. The service is based on the latest MongoDB 3.2 release.

Does Swisscom use MongoDB outside of the App Cloud?

We use it extensively for Over The Top (OTT) communications services in our residential division.

  • The Swisscom IPTV platform runs MongoDB to manage the electronic program guide, Video on-Demand and radio channels for nearly 1 million subscribers
  • Our latest project is the new Swisscom myCloud service providing secure multimedia content storage and management for over 8 million prospective customers.

Marco, thank you for taking the time to share details of your App Cloud with me.

Want to learn more about enabling microservices with containers and MongoDB? Read our new white paper.

Enabling Microservices: Containers & Orchestration Explained

About the Author - Mat Keep

Mat is a director within the MongoDB product marketing team, responsible for building the vision, positioning and content for MongoDB’s products and services, including the analysis of market trends and customer requirements. Prior to MongoDB, Mat was director of product management at Oracle Corp. with responsibility for the MySQL database in web, telecoms, cloud and big data workloads. This followed a series of sales, business development and analyst / programmer positions with both technology vendors and end-user companies.

Leaf in the Wild: MongoDB at CERN – Delivering a Single View of Data from the LHC to Accelerate Scientific Research and Discovery

Leaf in the Wild posts highlight real world MongoDB deployments. Read other stories about how companies are using MongoDB for their mission-critical projects.

Multi-Data Center Data Aggregation System Accessed by over 3,000 Physicists from Nearly 200 Research Institutions Across the Globe

The European Organisation for Nuclear Research, known as CERN, plays a leading role in the fundamental study of physics. It has been instrumental in many key global innovations and breakthroughs, and today operates the world's largest particle physics laboratory. The Large Hadron Collider (LHC), nestled under the mountains on the Franco-Swiss border, is central to its research into the origins of the universe.

A key part of CERN’s experiments is the Compact Muon Solenoid (CMS), a particle detector designed to observe a wide range of particles and phenomena produced in high-energy collisions in the LHC. Scientists use data collected from these collisions to search for new phenomena that will help to answer questions such as:

  • “What is the Universe really made of, and what forces act within it?”
  • “What gives everything substance?”

The CMS experiment has used MongoDB for over five years to aid the discovery and analytics of data generated from the LHC. I met up with Valentin Kuznetsov, part of the team responsible for data management in the CMS experiment, to learn more.

Can you start by telling us a little bit about what you are doing at CERN?

I am a data scientist and research associate at Cornell University where I specialize in the development of data management software for high energy physics experiments. I am also actively involved in data management for the Compact Muon Solenoid (CMS) experiment.


CMS is one of the two general-purpose particle physics detectors operated at the LHC. The LHC smashes groups of protons together at close to the speed of light: 40 million times per second, and with seven times the energy of the most powerful accelerators built to date. It is designed to explore the fundamental building blocks of the universe, and is used by more than 3,000 physicists from 183 institutions across 38 countries. This team drives the design, construction and maintenance of the experiments, which generate some 20PB of data each year.

How is data managed by the CMS?

Experiments of this magnitude require a vast distributed computing and storage infrastructure. The CMS spans more than a hundred data centres, handling raw data from the detector, as well as multiple simulations and associated meta-data. Data is stored in a variety of backend repositories that we call “data-services”, including relational databases, filesystems, message queues, wikis, customized applications and more.

At this scale, efficient information discovery within a heterogeneous, distributed environment becomes an important ingredient of successful data analysis. Scientists want to be able to query and combine data from all of the different data-services. The challenge for our users is that this vast and complex collection of data means they don’t necessarily know where to find the right information, or have the domain knowledge necessary to extract the data.

What role does MongoDB play in the CMS?

MongoDB powers our Data Aggregation System (DAS), providing the ability for researchers to search and aggregate information distributed across all of the CMS backend data-services, and bring that data into a single view.

The DAS is implemented as a layer on top of the data-services, allowing researchers to query data via free-text queries, and then aggregate the results from distributed providers into a single view – while preserving data integrity, security policy and formats. When a user submits a query, the DAS checks whether MongoDB already holds the requested aggregation and, if so, returns it. Otherwise the DAS performs the aggregation and saves the result to MongoDB.
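A simplified sketch of that cache-aside flow, with invented collection and field names (the real DAS is considerably more sophisticated):

```python
# Simplified cache-aside flow: return a stored aggregation if present,
# otherwise aggregate from the data-services and persist the result.
import time
from pymongo import MongoClient

cache = MongoClient("mongodb://das-host").das.cache  # invented names

def das_query(query_hash, aggregate_from_services):
    cached = cache.find_one({"qhash": query_hash})
    if cached is not None:
        return cached["records"]           # aggregation already materialized
    records = aggregate_from_services()    # fan out to the data-services
    cache.insert_one({"qhash": query_hash,
                      "ts": time.time(),   # timestamp drives later expiration
                      "records": records})
    return records
```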

Why was MongoDB selected to power the DAS?

MongoDB’s flexible data model was key to our selection. We can’t know the structure of all the different queries researchers want to run, so a dynamic schema is essential when storing results in a single view. This requirement eliminated relational databases from our evaluation process.

We could get similar schema flexibility from other non-relational databases, but what is unique about MongoDB is that it also offers a rich query language and extensive secondary indexes. This gives our users fast and flexible access to data by any query pattern – from simple key-value look-ups, through to complex search, traversals and aggregations across rich data structures, including embedded sub-documents and arrays. We also use CouchDB in other parts of our infrastructure for data replication between different endpoints.

Can you describe how MongoDB is deployed in the DAS?

MongoDB is used as an intelligent cache on top of distributed data-services. It runs on our storage backends and currently talks to about a dozen of our CMS data-services. The data-services are backed by traditional RDBMS systems based on the Oracle CERN IT cluster. The beauty of this architecture is that it allows us to transparently change our data-services without impacting user access to the system. In fact, we have already changed the implementation of a few data-services without affecting our end-users.

MongoDB handles the ingestion and expiration of millions of documents managed by the DAS every day. A single query can return up to 10,000 different documents extracted from multiple data-services, which then have to be processed, aggregated and stored in MongoDB, all in real time.

The DAS helps scientists easily discover and extract the information they need for their research, and it is one of the many tools physicists use on a daily basis in pursuit of great discoveries. Without the DAS, information retrieval would take orders of magnitude longer.

MongoDB is deployed on commodity hardware configured with SSDs and running Scientific Linux. Our applications are written in Python, and we are experimenting with Go.

Do you have other projects that are using MongoDB?

We are in the beta phase of our new Workflow and Data Management (WM) archive system. Agents from systems running data processing pipelines persist log data to MongoDB, which can then be used by our administrators and data scientists to monitor job status, system throughput, error conditions and more. MongoDB provides short-term storage for the archive, delivering real-time analytics to staff using MongoDB 3.2’s aggregation pipeline across the past two months of machine data. We collect the throughput of agents at specific sites and aggregate statistics such as the total CPU time of running jobs, their success/failure rates, the total size of produced data, and other metrics.
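To illustrate the kind of query involved, here is a hedged PyMongo sketch of an aggregation over the last two months of job documents; the field names are assumptions, not the actual WMArchive schema:

```python
# Assumed job-document schema; aggregates per-site statistics over the
# two-month window the archive keeps in MongoDB.
from datetime import datetime, timedelta
from pymongo import MongoClient

jobs = MongoClient("mongodb://wmarchive-host").wmarchive.jobs

pipeline = [
    {"$match": {"timestamp": {"$gte": datetime.utcnow() - timedelta(days=60)}}},
    {"$group": {
        "_id": "$site",
        "jobs": {"$sum": 1},
        "total_cpu": {"$sum": "$cpu_time"},
        "failures": {"$sum": {"$cond": ["$failed", 1, 0]}},
        "output_bytes": {"$sum": "$output_size"},
    }},
    {"$sort": {"total_cpu": -1}},
]
for site_stats in jobs.aggregate(pipeline):
    print(site_stats)
```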

Data is replicated from MongoDB to Hadoop where it is converted from JSON into an AVRO format, and persisted for long-term storage. Spark jobs are run against the historic archive, with result sets loaded back to MongoDB where they update our real time analytics views, and served to monitoring and visualization apps.

Figure 1: Architecture of the CMS WMArchive system

Can you share any best practices for getting started with MongoDB?

MongoDB’s dynamic schema is great for rapidly prototyping new applications – it gives you the freedom to try out new ideas, and you can be productive in hours. It doesn’t replace the need for proper schema design, but its flexibility means you can quickly iterate to identify the optimum data model for your application.

MongoDB also has a vibrant and active community. Never be fearful of reaching out and asking questions, even if you are afraid they may seem pretty basic! MongoDB engineers and community masters are active on the mailing lists, and so you can always get help and guidance.

Valentin, thank you for sharing your experiences with the MongoDB community.

Learn more about real time analytics and MongoDB by reading our white paper:

Apache Spark and MongoDB


Leaf in the Wild: Leveler.com Realizes 30x Higher Performance and 3x Lower Development Overhead after Migrating to MongoDB

Leaf in the Wild posts highlight real world MongoDB deployments. Read other stories about how companies are using MongoDB for their mission-critical projects.

Leveler.com is a fast growing Software-as-a-Service (SaaS) platform for independent contractors, rapidly building out new functionality and winning new customers. However, the wrong database choice in the early days of the company slowed down the pace of development and drove up costs. That was until they moved to MongoDB.

I met with Jeremy Kelley, co-founder of Leveler.com to learn more about his experiences.

Can you start by telling us a little bit about your company? What are you trying to accomplish?

Leveler.com is a Software-as-a-Service (SaaS) platform for independent contractors, designed to make it super-easy for skilled tradespeople, such as construction professionals, to manage complex project lifecycles with the aid of mobile technology. Through the Leveler.com service, contractors can manage their customer database, generate gorgeous estimates and proposals, track work orders and client communication, organize job files and images, all the way through to managing invoicing and payment. The service can be accessed from any location and any device via our web and mobile apps.

How are you using MongoDB?

MongoDB powers our entire back-end database layer. We have implemented a multi-tenant environment, allowing contractors to securely store and manage all data related to their projects and customer base.

**Figure 1**: Leveler.com creates a single view of job details, making it fast and easy for contractors to stay on top of complex projects

What were you using before MongoDB?

We started out with Couchbase. I had some experience with it and CouchDB from a previous company. As many of our customers are field- rather than office-based, I was attracted by its mobile capabilities. But it really didn’t work out for us.

Couchbase is fast for simple key-value lookups, but performance suffered quite a bit when doing anything more sophisticated. For example:

  • Range queries were slow as we waited for MapReduce views to refresh with the latest data written to the database. These types of queries bring important capabilities to our service – for example, contractors might want to retrieve all customers who have not been called for an appointment, or all estimates generated in the past 30 days.
  • The N1QL API showed promise, but ended up imposing additional latency and was awkward to work with for the analytics our service needs to perform.
  • Updates were inefficient as the entire document had to be retrieved over the network, rewritten at the client, and then sent back to the server where it replaced the existing document. We had to manage concurrency and conflicts in the application which added complexity and impacted overall performance of the database.
  • We also found some of the features in the mobile sync technology were deprecated with little warning or explanation.

So with all of these challenges, we migrated the backend database layer to MongoDB. Our mobile apps took advantage of client side caching in the Ionic SDK for on-device data storage to replace Couchbase Mobile.

What were the results of moving to MongoDB?

Query performance improved by an average of 16x, with some queries improved by over 30x.

Our applications were faster to develop thanks to MongoDB’s expressive query language, consistent indexes and powerful analytics via the aggregation pipeline. We can pull a lot of intelligence on how the service is consumed using MongoDB’s native analytics capabilities – for example, “how many pageviews did a signup from Facebook generate in the first 4 hours?” or “how many pageviews in our app originated from the Estimate form view on each day?”
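As an illustration of the first question, here is a small sketch using the modern PyMongo API; Leveler’s schema is not public, so the collection and field names are assumed:

```python
# Invented schema: a users collection with signup metadata, and a
# pageviews collection with one document per view.
from datetime import timedelta
from pymongo import MongoClient

db = MongoClient("mongodb://leveler-host").leveler

def pageviews_first_4_hours(user_id):
    user = db.users.find_one({"_id": user_id, "signup_source": "facebook"})
    if user is None:
        return 0
    window_end = user["signup_at"] + timedelta(hours=4)
    return db.pageviews.count_documents({
        "user_id": user_id,
        "ts": {"$gte": user["signup_at"], "$lt": window_end},
    })
```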

Before MongoDB, our development team was spending 50% of their time on database-related development. Now it is less than 15%, freeing up time to focus on building application functionality that is growing our business.

Did you consider other alternatives besides MongoDB?

We didn’t want to get burned again with a poor technology choice, so we spent some time evaluating other options.

Cassandra was suggested, but it didn’t match our needs. It was far too hard to develop against due to its data model and eventually consistent design. If we wanted to do anything more than basic lookups, we found we would have to integrate multiple adjacent search and analytics technologies, which not only further complicates development, but also makes operations a burden.

Our engineering team has a lot of respect for Postgres, but its static relational data model was just too inflexible for the pace of our development. We can’t afford downtime and application-side code changes every time we adapt the schema to add a new column. So Postgres, or any other relational database, wasn’t an option for us.

Please describe your development environment.

Our backend systems are developed mainly in Python, so we use the PyMongo driver to connect to MongoDB and the excellent Mogo library for object mapping. We also use the mgo driver to connect a Go application used for analysis.

Our web application uses AngularJS and React, and our mobile apps are built on Ionic.

Which version of MongoDB are you running?

We are on the latest MongoDB 3.2 release. We have to support multiple languages in our service, so 3.2’s enhanced text search with diacritic insensitivity is a huge win for us. MongoDB’s native text indexing has enabled us to deprecate our own internally developed search engine with minimal model changes.
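A minimal sketch of the feature: MongoDB 3.2 creates version 3 text indexes by default, which are case- and diacritic-insensitive. The collection and field names below are illustrative:

```python
# Text search sketch: v3 text indexes (the MongoDB 3.2 default) match
# case- and diacritic-insensitively. Collection and fields are invented.
from pymongo import MongoClient, TEXT

proposals = MongoClient("mongodb://leveler-host").leveler.proposals

proposals.create_index([("title", TEXT), ("body", TEXT)],
                       default_language="spanish")
proposals.insert_one({"title": "Renovación de cocina",
                      "body": "Presupuesto detallado para la obra."})

# The unaccented search term still matches the accented document.
for match in proposals.find({"$text": {"$search": "renovacion"}}):
    print(match["title"])
```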

Can you describe your operational environment?

Our service is powered by three MongoDB replica sets running on the Digital Ocean cloud. MongoDB’s self-healing recovery is great – unlike our previous solution, we don’t need to babysit the database. If a node suffers an outage, MongoDB’s replica sets automatically failover to recover the service. Again, this means we focus on the app, and not on operations.

All instances are provisioned by Ansible onto SSD instances running Ubuntu LTS. Monitoring is via Graphite. Our backups are encrypted and stored on AWS S3.

We are also kicking off an evaluation of MongoDB Cloud Manager which we believe will bring us even greater levels of operational automation and simplicity.

**Figure 2**: Creating stunning proposals on the move

How are you measuring the impact of MongoDB on your business?

We can innovate faster and at lower cost. We spend more time building functionality and improving user experience, and less time battling with the database.

We have achieved major application performance gains while requiring fewer servers. We also need less storage. MongoDB offers higher performance on sub-documents, enabling us to create more deeply embedded data models, which in turn has reduced the number of documents we need to store by 40%.
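A hypothetical sketch of the embedding pattern being described, with an invented estimates schema:

```python
# Hypothetical estimates schema with line items embedded as an array of
# sub-documents, instead of one document per line item.
from pymongo import MongoClient

db = MongoClient("mongodb://leveler-host").leveler

db.estimates.insert_one({
    "contractor_id": 7,
    "customer": "A. Mason",
    "status": "draft",
    "items": [
        {"desc": "Framing labour", "qty": 40, "unit_price": 55.0},
        {"desc": "Lumber", "qty": 1, "unit_price": 1240.0},
    ],
})

# Append a line item in place: no client-side rewrite of the document,
# and no extra documents to store.
db.estimates.update_one(
    {"contractor_id": 7, "status": "draft"},
    {"$push": {"items": {"desc": "Drywall", "qty": 12, "unit_price": 18.5}}},
)
```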

What advice would you give someone who is considering using MongoDB for their next project?

Just do it. MongoDB is a perfect fit for SaaS platforms. Unlike NoSQL alternatives, it can serve a broad range of applications, enforce strict security controls, maintain always-on availability and scale as the business grows. The result is that it will free you up to spend more time building great services.

Jeremy, thank you for taking the time to share your experiences with the community.


If you are wondering which database to use for your next project, download our white paper: Top 5 Considerations When Evaluating NoSQL Databases.



Leaf in the Wild: India’s Largest Publisher Unlocks Behavioral Insight with MongoDB-Powered Real Time Web Analytics Engine

Leaf in the Wild posts highlight real world MongoDB deployments. Read other stories about how companies are using MongoDB for their mission-critical projects.

Times Internet Limited, the largest news publisher in India, relies on MongoDB to power its editorial analytics engine, serving more than 150 million readers with customized content and experiences. I had the chance to meet with Gagan Bajpai and Gyan Mittal, Senior Managers, Central Technical Team at Times Internet Limited (TIL) to learn more about how they use MongoDB.

“We don’t ask, ‘Why MongoDB?’ anymore. Now we ask, ‘Why would we use anything else?’” – Gyan Mittal, Senior Manager, Central Technical Team, Times Internet Limited.

Please start by telling us a little bit about Times Internet.

Times of India Group is India's largest media and entertainment company. All of its digital platforms are run by TIL. Our websites are among the fastest growing web and mobile properties worldwide. Since its inception in 1999, Times Internet has led the Internet revolution in India, emerging as India's foremost web entity, and now runs diverse portals and specialized websites.

TIL properties reach over 150M visitors and serve 2 billion page views every month across web and mobile channels. We have brands across news, entertainment, sports, e-commerce, classifieds, startup investments, local partnerships, and more. Today, we have a diversified set of 22+ digital consumer-facing businesses.

Tell us how you use MongoDB.

We use MongoDB for a range of applications. This includes the content management system journalists use to upload stories, e-commerce gateways, newsletter and alerts apps, and the social platform that covers all of our web properties. We are also using MongoDB Cloud Manager for on-demand scaling and automated backups for disaster recovery.

Our web analytics engine is the most critical and highest-profile application running on MongoDB. It was the first MongoDB project at TIL, and we launched it to the business in late 2010. Our editorial staff and product managers rely on the analytics engine to deepen engagement with our 150M+ audience. The engine tracks and analyses user engagement with every published story, providing feedback on how content is consumed through heat-maps and analytics dashboards. Site editors gain insights into the length of time spent per page, how content is shared across social networks, and where readers focus their attention. The analytics generated by MongoDB enable editorial staff to make data-driven decisions, improving future content to better address reader preferences, including tweaking headlines, moving copy, A/B testing of alternative images, and altering page layouts. TIL’s engine also provides personalized content recommendations based on readers’ browsing habits. Collectively, these capabilities ensure the sites’ articles are reaching and engaging with the broadest possible audience.

Did you consider other databases for your app? What made you select MongoDB?

My team at TIL all come from a relational database background and have massive respect for that technology. But our web analytics application presented us with a classic “big data” problem:

  • We had to deal with large volumes of data generated by tracking how content is consumed across our sites.
  • This data was coming in at high velocity from millions of concurrent users.
  • We capture many different attributes of user behavior, so our database and analytics engine need to handle a wide variety of data structures.

In addition, development agility was critical. To put this into context, Internet growth in India is much faster than pretty much anywhere else in the world. We have a huge population who are now getting access to the Internet via low-cost mobile platforms. So competition is intense, and time to market is critical. We also knew that our application would need to continually evolve to keep pace with features the business would ask us to add. So a flexible and dynamic schema was also critical to give us the agility we needed. Working with a data model that eliminates the traditional object-relational impedance mismatch would allow our developers to move with much higher velocity in building the app.

Because of all of these factors, we felt a non-relational database would be a better fit for the web analytics app.

That said, what we love about relational databases is the ability to run deep and complex queries against the data. And this is also where MongoDB excels. Unlike NoSQL databases that require you to integrate a search engine or replicate data to dedicated analytics nodes or Hadoop, MongoDB enables us to run rich queries against in-place data, all in real time. MongoDB’s aggregation pipeline powers our heat maps and dashboards. It is much more performant and easier to use than MapReduce.
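To give a flavor of such a dashboard query, here is a hedged aggregation sketch in PyMongo; the event schema is an assumption, not TIL’s actual model:

```python
# Assumed engagement-event schema; groups per-story metrics for an
# editorial dashboard.
from pymongo import MongoClient

events = MongoClient("mongodb://til-host").analytics.events

pipeline = [
    {"$match": {"event": "read"}},
    {"$group": {
        "_id": "$story_id",
        "readers": {"$sum": 1},
        "avg_seconds_on_page": {"$avg": "$seconds_on_page"},
        "shares": {"$sum": "$share_count"},
    }},
    {"$sort": {"readers": -1}},
    {"$limit": 20},   # top stories for the dashboard
]
for story in events.aggregate(pipeline):
    print(story)
```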

The MongoDB query language and secondary indexes give us a much more powerful framework to access and analyze multi-structured data than anything simple key-value stores can provide.

Developer velocity. That’s what I am focused on. How fast can we get this robust application live in the shortest amount of time? Our team built the analytics engine in a fraction of the time it would have taken on any other database and then it scaled beautifully to help us understand and engage with millions of readers.

Please describe your MongoDB deployment.

Our total MongoDB estate is around 50 nodes, powering multiple apps. Most apps are powered by a single replica set configured with two data nodes and an arbiter. This provides the ideal balance between high availability and operational efficiency.

Our web analytics platform is deployed on a sharded cluster. This gives us the scalability we need. We have around 1.5TB of active data in the cluster. The application itself is written in Java.

We run MongoDB on Linux-based servers hosted by Rackspace in a co-location facility.

Do you use any commercial services to support your MongoDB deployment?

We use MongoDB Professional to back the web analytics platform. Break/fix support is important, but as our deployment and our team grows, it’s good to be able to get regular check-ins with MongoDB engineers, and review things like schema design and best practices for operational processes.

As our deployment has grown, we are also now starting to evaluate MongoDB Cloud Manager. Automated configuration and deployment can simplify on-demand scaling and upgrades, and the backup service enhances our disaster recovery capability.

What has been the business outcome of using MongoDB for your web analytics engine?

We have demanding managers and editors looking to understand quickly how our readers are engaging with the news.

MongoDB is the solution that helps us turn heavy raw data into actionable insights that fundamentally change the way we deliver content.

Do you have plans to use MongoDB for other applications?

We don’t ask, ‘Why MongoDB?’ anymore. Now we ask, ‘Why would we use anything else?’.

What advice would you give someone who is considering using MongoDB for their next project?

Don’t just follow the crowd. Don’t just choose the same technology you have always chosen. There is so much innovation happening today, and the databases of the last decade are not always the right choice.

Once you have a short list of potential technologies, test them with your app, your queries, and your data. It is the only way to be sure you are choosing the right technology going forward.

Gyan and Gagan, thank you both for your time, and sharing your experiences with the MongoDB community.


Are you building big data applications? Read Big Data Examples and Guidelines to get started.


Leaf in the Wild: Haymarket Migrates from MySQL to MongoDB, Achieving 8x Higher Platform Efficiency

The media industry is undergoing a fundamental transformation as the move from print to web and mobile accelerates. I met with Peter Dignan, Platform Director at Haymarket Media Group to discuss how the use of modern database platforms enables media companies to embrace new channels and engage with a global audience.

Leaf in the Wild: Square Enix Scales TOMB RAIDER, HITMAN ABSOLUTION, DEUS EX and Many More Titles with MongoDB

Editor's note: This post was updated on June 30, 2015, to reflect the transition from MongoDB Management Service (MMS) to MongoDB Cloud Manager. Click here to learn more.

MongoDB Enterprise Advanced Enables a Single Administrator to Deliver Continuous 24x7 Availability Across Dozens of Sharded Clusters

With titles as popular as TOMB RAIDER and FINAL FANTASY, Square Enix is one of the world's leading video game publishers. As it pursued its online games strategy, Square Enix quickly hit the scalability limits of its relational databases, and decided to migrate to MongoDB. By adopting a multi-tenant database-as-a-service model, Square Enix was able to consolidate its database instances, improving performance and reliability. Advanced operational tooling enables Square Enix's operations teams to scale dozens of database clusters on demand and deliver 24x7 availability to gamers around the world, all with a single administrator.

With this year's E3 Expo approaching, I caught up with Tomas Jelinek, Senior Online Operations Administrator at Square Enix Europe, to talk about how his platforms have evolved to meet the demands of millions of connected gamers around the world.

Can you start by telling us a little bit about Square Enix?

Square Enix is one of the world's leading providers of digital entertainment content, and its influence is far-reaching. Our catalogue includes some of the biggest names in gaming, such as FINAL FANTASY, DRAGON QUEST, TOMB RAIDER, HITMAN, DEUS EX and JUST CAUSE, which together have sold millions of copies.

We are always looking to push the boundaries of creativity and innovation by delivering high-quality entertainment content and products. As the gaming industry sits at the intersection of big data, mobility and cloud computing, our infrastructure platforms are essential to maintaining a competitive edge and offering our gamers a unique experience.

We have some very exciting announcements to make at E3, and they make me especially proud and happy to work in the operations team whose efforts keep our gaming platforms running smoothly.

Tell us how Square Enix uses MongoDB.

In the beginning, our games ran on a platform where data was stored in old-school relational databases. But that was not enough to support our growing number of titles, the increasing complexity of our games, and the incredible growth of our data sets. So we built our multi-tenant online suite: a centralized, shared infrastructure used across the company. We delivered MongoDB as a service to all of our studios and developers. As part of this online suite, we developed an API that lets our studios use MongoDB to store and manage metrics, player profiles, team information, leaderboards and competitions. We also use MongoDB to let our players share messages across all supported platforms, such as PlayStation, Xbox, PC, web interfaces, iOS, Android and more. The main goal of the online suite is to support the functionality required by all games.

Each title also needs support for its own in-game features, so each one has dedicated infrastructure, connected to MongoDB, to store game state and player metrics, as well as title-specific features and content. For example, Hitman Absolution lets players create their own contracts and then share them with other players. All of this is managed by MongoDB.

We also use MongoDB for our cross-game and player-to-player messaging services. Our web and mobile sites use MongoDB for content management and product catalogues.

*Just Cause 3: ready to burst onto any platform near you. Just Cause 3 © 2015 Square Enix Ltd*

What technologies were you using before MongoDB? Did you start from scratch, or migrate from another database?

The move to online gaming began in 2007. We used relational databases to store player profiles and leaderboards, and to analyze the metrics collected from our games. But as our online audience grew, scaling our databases became a problem. We eventually brought in teams of consultants and acquired hardware to increase capacity, but the incumbent databases were no longer enough.

We started asking more and more questions about where our data was headed. Running a complex analytical query against our relational database could take up to three weeks! We knew it was time to look for an alternative.

Comment avez-vous appris l'existence de MongoDB ? Avez-vous envisagé d'autres possibilités ?

Nous avons lancé nos recherches en vue d'un remplacement dès 2011. Nous avions besoin d'une base de données pouvant répondre aux besoins de nos équipes de développeurs à Montréal et de notre équipe des opérations, ici, à Londres.

Les équipes de développement se souciaient particulièrement de la vitesse à laquelle elles pouvaient développer de nouveaux jeux et ajouter des fonctionnalités pour maximiser le cycle de vie de leurs jeux. Elles devaient également s'assurer que la nouvelle base de données pouvait prendre en charge toutes les fonctionnalités opérationnelles et analytiques dont elles avaient besoin. Il leur fallait donc un schéma flexible et un langage de requête expressif.

Notre équipe opérationnelle, de son côté, devait avaliser les capacités de scalabilité et de robustesse de la base de données. Nous ne pouvions faire, en effet, aucun compromis sur la qualité de l'expérience de nos utilisateurs, quand bien même les développeurs eussent été pleinement satisfaits. Pour éviter tout problème de gestion courante, nous devions également vérifier que la base de données pouvait s'adapter à nos flux de travail opérationnels et à nos outils.

Each team ran its own evaluation. We built a scoring matrix and examined a range of database technologies, both new and well established, MongoDB among them. We engaged an external consultant to guide us through the process.

Every team reached the same conclusion: MongoDB was the best fit for our next-generation games platform. And those complex queries that took three weeks on our relational databases? We now run them in two minutes on MongoDB. It has transformed the speed of our in-game analytics.

In database terms, 2011 already feels like a long time ago, and I'm sure the products we evaluated have moved on considerably since then. But we have never regretted choosing MongoDB.

**Can you describe your MongoDB deployment?**

Today we mainly run MongoDB 2.6. Each game server instance is deployed on a virtual machine running Ubuntu Linux, Nginx, and Jetty, connecting to MongoDB with the Java driver.

Our shared online suite is provisioned on the main MongoDB cluster, which has 10 shards. Each shard is configured as a 3-node replica set running a primary, a secondary, and a replica set arbiter. Each MongoDB instance runs on its own physical server in our data centers.

We use a different architecture for the dedicated clusters backing individual titles. Load on those game servers is extremely spiky. It's common to provision more than 60 front-end servers to support a new game's launch, then scale back as traffic settles, and our marketing teams regularly run promotions on particular games. With this approach, we simply add nodes to the cluster when we need them. That kind of elasticity is essential to keeping our costs as low as possible by avoiding over-provisioning, and MongoDB gives us a persistence layer solid enough to support it.
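To make the elasticity concrete: on a sharded cluster, adding capacity amounts to bringing up a new replica set and registering it with the `addShard` admin command. A minimal sketch via the Java driver, assuming a connection to a `mongos` router; the shard and host names are invented:

```java
import com.mongodb.MongoClient;
import org.bson.Document;

public class AddShardSketch {
    public static void main(String[] args) {
        // Connect to a mongos router (hostname is hypothetical).
        MongoClient mongos = new MongoClient("mongos.example.internal", 27017);

        // Register a freshly provisioned replica set as a new shard.
        // "shard0011" and its hosts are placeholders.
        Document result = mongos.getDatabase("admin").runCommand(new Document(
                "addShard",
                "shard0011/node1.example.internal:27018,node2.example.internal:27018"));
        System.out.println(result.toJson());

        // The balancer then migrates chunks onto the new shard in the
        // background; removeShard reverses the process after a launch spike.
        mongos.close();
    }
}
```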

Several of our dedicated game clusters are deployed on AWS, and we have started using MongoDB Cloud Manager to automate provisioning and configuration, and to manage upgrades.

I find Cloud Manager especially useful: it saves us a great deal of time. It lets us scale our infrastructure without growing our operations headcount. That means I can be more productive, and leave work earlier!

*Automated MongoDB provisioning on AWS EC2*

**Which tools do you use to manage your MongoDB deployment?**

Beyond provisioning, we also use Cloud Manager to collect telemetry from our MongoDB clusters. We continuously monitor and alert on key metrics such as opcounters, queues, connections, memory and CPU utilization, and disk IOPS. By baselining and alerting on these key metrics, we can scale MongoDB to meet rising traffic before our service starts to degrade.
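The same counters Cloud Manager charts are exposed by MongoDB's `serverStatus` command, so baselines can also be sampled directly. A minimal sketch with the Java driver, run against any node in the cluster:

```java
import com.mongodb.MongoClient;
import org.bson.Document;

public class MetricsSketch {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);

        // serverStatus returns the counters Cloud Manager graphs:
        // opcounters, queues, connections, memory, and more.
        Document status = client.getDatabase("admin")
                                .runCommand(new Document("serverStatus", 1));

        Document opcounters = (Document) status.get("opcounters");
        Document connections = (Document) status.get("connections");
        Document globalLock = (Document) status.get("globalLock");
        Document queue = (Document) globalLock.get("currentQueue");

        System.out.println("inserts since startup:  " + opcounters.get("insert"));
        System.out.println("open connections:       " + connections.get("current"));
        System.out.println("queued readers/writers: "
                + queue.get("readers") + "/" + queue.get("writers"));

        client.close();
    }
}
```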

For our wider core infrastructure, we use Nagios and Cacti for monitoring, with everything automated using Ansible and Puppet.

Our web and mobile properties run a slightly different setup, in which MongoDB is connected to Ruby services, with Docker for orchestration.

**Can you share some best practices for scaling your MongoDB infrastructure?**

First, put proper monitoring in place. Never wait until the system is overloaded before adding capacity: once a system is running low on headroom, it becomes very hard to scale it out without hurting application performance.

I'd also advise against co-locating too many resources across completely different applications. Sharing nodes or config servers, for example, can complicate upgrades. So I recommend isolating components when your applications have very different query patterns and traffic profiles.

**Have you integrated MongoDB with other data analytics platforms?**

We use Pentaho to build dashboards over the data stored in MongoDB.

Metrics collected from games and players are stored in MongoDB and then loaded into a Cloudera Hadoop cluster for deeper offline analysis.
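Before the heavier offline work moves to Hadoop, day-to-day rollups of this kind of telemetry can be computed in place with MongoDB's aggregation framework. A hedged sketch of the sort of query a Pentaho dashboard might issue, using the 3.x Java driver; the `telemetry.events` collection and its fields are hypothetical:

```java
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.Arrays;

import static com.mongodb.client.model.Accumulators.sum;
import static com.mongodb.client.model.Aggregates.group;
import static com.mongodb.client.model.Aggregates.match;
import static com.mongodb.client.model.Filters.eq;

public class MetricsRollupSketch {
    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        MongoCollection<Document> events =
                client.getDatabase("telemetry").getCollection("events");

        // Total session time per platform for one title -- the kind of
        // rollup a dashboard can chart straight from MongoDB.
        for (Document row : events.aggregate(Arrays.asList(
                match(eq("title", "just_cause_3")),
                group("$platform", sum("totalSeconds", "$sessionSeconds"))))) {
            System.out.println(row.toJson());
        }

        client.close();
    }
}
```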

**Do you use any MongoDB services to support your deployment?**

Yes. We use MongoDB Enterprise Advanced, which gives us access to MongoDB's technical support. When something goes wrong, the support team helps us quickly. Last Christmas, for example, we suffered degraded performance on two clusters. MongoDB's support engineers quickly diagnosed the issue and traced the root cause to an underlying hardware problem. We avoided any impact on the customer experience during one of our busiest periods of the year. That is why MongoDB support is so valuable to us.

**How do you quantify the impact of MongoDB on your business?**

The customer experience is at the heart of everything. To protect it, we have to be able to scale quickly and on demand when new games launch. We cannot tolerate any downtime: our services must stay available 24/7, through failures and scheduled maintenance alike. MongoDB's architecture gives us all of that.

We can add and remove MongoDB nodes from our sharded clusters at any time, which is crucial when carrying out various management tasks. The volume of data we collect grows constantly, and MongoDB's capacity can grow at the same rate. A new game typically generates 120 GB of data per day. Considering that this comes on top of the data generated by our other games, and everything from previous years, you can imagine how fast the data set grows. New data is stored on fast disks, while older data is migrated to slower disks.

Replica sets improve our fault tolerance, and rolling upgrades let us change the platform while keeping our services online.

MongoDB has given us operational efficiency that is absolutely crucial. I can run our entire MongoDB estate practically on my own. Cloud Manager, combined with proactive support, has been essential to reaching that level of efficiency.

**Are you planning an upgrade to MongoDB 3.0?**

We find the new release very attractive. Its more granular concurrency control will improve our performance, especially for write-heavy workloads. And as our data sets keep growing, compression matters a great deal for optimizing our storage footprint. We plan to move to MongoDB 3.0 this year.

**What advice would you give someone considering MongoDB for their next project?**

Nothing could be simpler than trying MongoDB. With Cloud Manager you can stand up new instances on AWS in seconds. Just load your data and start testing.

Tomas, thank you for taking the time to share this with the MongoDB community.


Want to try Cloud Manager? Sign up for our free 30-day trial:

Try Cloud Manager for free


Square Enix games running on MongoDB include Tomb Raider, Lara Croft and the Guardian of Light, Lara Croft and the Temple of Osiris, Hitman Absolution, Hitman Go, Deus Ex, Thief, Just Cause 3, Sleeping Dogs, Life is Strange, and Nosgoth. DEUS EX, DRAGON QUEST, FINAL FANTASY, HITMAN, JUST CAUSE, SQUARE ENIX, the SQUARE ENIX logo, and TOMB RAIDER are trademarks or registered trademarks of the Square Enix Group. All other trademarks are the property of their respective owners.