Server Density

Server Density is a real-time server and website monitoring solution with mobile and web interfaces. Founder David Mytton is an early MongoDB adopter and has been using the technology since 2009 (pre-MongoDB v1.0), making Server Density one of the longest-running production deployments. The company has a MongoDB-First philosophy and is using MongoDB in multiple ways within its application, from storing time-series data to queuing. According to Mytton, “MongoDB is a very good general purpose database that is extremely fast and easy to scale, making it my first choice for any project.”

The Problem

Server Density must ensure that massive amounts of data collected from hundreds of thousands of systems can be constantly refreshed and immediately available to customers. Server Density requires highly performant writes to handle thousands of requests per second and every event that a monitored server generates. In this high-throughput environment, the ability to scale while maintaining performance is critical.

Initially, MySQL served as the backend of the Server Density application. Since the monitoring agent reports back every 60 seconds on every running process, this generated millions of MySQL rows per monitored server each month. The company wanted to scale using replication, but given the volume of data, MySQL had a hard time keeping up, especially with the initial sync. Scaling MySQL across multiple clustered servers would also have been difficult. (See Mytton’s blog for more details.)

After evaluating multiple non-relational databases, Server Density turned to MongoDB for its efficient, reliable, and easy-to-use replication, as well as the native, high-performance PHP and Python drivers supported by MongoDB.

Why MongoDB?

Initially, Server Density stored 40 GB of data in MySQL. Today, MongoDB ingests more than 12 TB per month for various use cases within the application:

  • Customer data store – stores account information, login/password, user emails, etc.
  • Time-series data store – stores large quantities of time-series data. Server Density collects both the original values and derived performance calculations on server data (e.g. disk and memory usage over time) and generates real-time status reports and graphs (see the time-series sketch after this list).
  • Messaging queue – ingests messages (such as application events and instructions for background jobs) at high rates. MongoDB offers lightweight, fast queuing that stores everything in memory, together with automatic failover for deployment across multiple data centers (see the queue sketch after this list).
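
As an illustration of the time-series use case above, the sketch below stores per-minute readings in pre-aggregated, one-document-per-hour buckets, a common MongoDB pattern for this kind of workload. It is a minimal sketch using a recent version of the Python driver; the database, collection, field names and metric are hypothetical, not Server Density’s actual schema.

    from datetime import datetime, timezone
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    metrics = client.monitoring.metrics_hourly  # hypothetical database/collection

    def record_metric(device_id, name, value, ts=None):
        """Upsert one reading into a pre-aggregated, one-document-per-hour bucket."""
        ts = ts or datetime.now(timezone.utc)
        hour = ts.replace(minute=0, second=0, microsecond=0)
        metrics.update_one(
            {"device_id": device_id, "metric": name, "hour": hour},
            {
                "$set": {f"values.{ts.minute}": value},  # one slot per minute
                "$inc": {"count": 1, "sum": value},      # running totals for fast averages
                "$min": {"min": value},
                "$max": {"max": value},
            },
            upsert=True,
        )

    record_metric("server-42", "memory_used_mb", 1843.5)

Pre-aggregating like this keeps document counts bounded and lets a graph query read a handful of hourly documents rather than thousands of raw rows.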
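
The messaging-queue item above maps onto a well-known MongoDB pattern: a capped collection consumed with a tailable cursor. The sketch below, again assuming a recent PyMongo, is a minimal illustration of that pattern rather than Server Density’s actual implementation; the collection, field names and handler are hypothetical.

    import time
    from pymongo import CursorType, MongoClient

    client = MongoClient("mongodb://localhost:27017")
    db = client.monitoring  # hypothetical database name

    # Capped collections are fixed-size and preserve insertion order,
    # which makes them usable as a lightweight queue.
    if "events_queue" not in db.list_collection_names():
        db.create_collection("events_queue", capped=True, size=100 * 1024 * 1024)
    queue = db.events_queue

    # Producer: push a background-job instruction onto the queue.
    queue.insert_one({"type": "process_alert", "payload": {"device_id": "server-42"}})

    def handle(message):
        print("processing", message["type"])  # placeholder for real work

    # Consumer: a tailable cursor waits for new documents,
    # much like `tail -f` on a log file.
    cursor = queue.find(cursor_type=CursorType.TAILABLE_AWAIT)
    while cursor.alive:
        for message in cursor:
            handle(message)
        time.sleep(0.1)  # no new messages yet; retry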

HIGH AVAILABILITY WITH AUTOMATIC FAILOVER

Unlike MySQL and other databases, MongoDB has replication built directly in, and it is easy to set up. Failover between servers also happens automatically, so Server Density doesn’t have to worry about manually reconfiguring a cluster in the event of a failure.
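
As a rough sketch of what “built in and easy to set up” means in practice, a replica set is created with a single command, and drivers connected with the replica set name find the new primary on their own. This assumes a recent PyMongo and members already started with --replSet rs0; the hostnames and set name are placeholders, not Server Density’s topology.

    from pymongo import MongoClient

    # One-time setup, run against one member
    # (the Python equivalent of rs.initiate() in the mongo shell).
    seed = MongoClient("db1.example.com", 27017, directConnection=True)
    seed.admin.command("replSetInitiate", {
        "_id": "rs0",
        "members": [
            {"_id": 0, "host": "db1.example.com:27017"},
            {"_id": 1, "host": "db2.example.com:27017"},
            {"_id": 2, "host": "db3.example.com:27017"},
        ],
    })

    # Application clients connect with the replica set name; the driver
    # tracks the members and automatically re-routes writes to the new
    # primary if the current one fails.
    client = MongoClient(
        "mongodb://db1.example.com,db2.example.com,db3.example.com/?replicaSet=rs0"
    )
    client.monitoring.accounts.find_one({"email": "user@example.com"})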

MongoDB keeps multiple copies of the data in sync across data centers in multiple geographic regions, protecting Server Density’s data if a server is lost and delivering a more reliable, stable application for customers. In many cases, a failover is only noticed after it has occurred, via informational alerts, rather than because of a system failure.

SIMPLIFIED SCALABILITY

When new servers are added, MongoDB handles distributing data across them and routing queries to the correct shards, so Server Density does not have to build that logic into its application. Because MongoDB provides scalability with sharding and redundancy with replica sets, developers can focus on building new features for the application rather than engineering custom database failover mechanisms.
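
A minimal sketch of the sharding workflow that paragraph describes, assuming a recent PyMongo and a cluster fronted by a mongos query router; the database, collection and shard key are hypothetical, and the key is worth choosing carefully because it is hard to change later (see Lessons Learned below).

    from pymongo import MongoClient

    # Connect to a mongos query router, not directly to a shard.
    mongos = MongoClient("mongodb://mongos1.example.com:27017")

    # Enable sharding on the database, then shard a collection on a key.
    mongos.admin.command("enableSharding", "monitoring")
    mongos.admin.command(
        "shardCollection",
        "monitoring.metrics_hourly",
        key={"device_id": 1},  # hypothetical shard key
    )

    # The application keeps issuing ordinary queries; mongos routes each
    # one to whichever shard(s) hold the relevant chunks.
    doc = mongos.monitoring.metrics_hourly.find_one({"device_id": "server-42"})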

SUBSCRIPTIONS

With direct access to the people developing the database via MongoDB’s commercial subscriptions (which include support), Mytton’s team is able to quickly get help, which “makes a big difference if something goes wrong.”

Deployment

  • OS: Ubuntu Linux
  • Deployment platform: MongoDB on dedicated and virtual servers at SoftLayer
  • Server hardware configuration: 30 MongoDB servers with a range of different configurations based on use case (75/25 split between dedicated hardware and virtualized instances depending on the performance requirements)
  • Xen virtualization
  • The primary data store runs on dedicated hardware with SSDs, and its entire data set is kept in RAM
  • Replica sets: fully automatic failover across 2 different data centers in the U.S. (replica set and shard sizes vary from a single member up to 5-member sets; clusters range from unsharded up to 4 shards)
  • Provisioning and configuration management: Puppet
  • Monitoring: Server Density, MongoDB Management Service, New Relic
  • Data size: ingesting 12 terabytes/month; 3,000-4,000 updates/second

Results

With MongoDB, Server Density’s performance has increased (in many cases, query times are significantly faster than with MySQL), disk usage has decreased, and the company is well positioned to continue its scaling plans.

All of the company’s developers have significant experience with MongoDB, making it easy to resolve challenges that may arise without having to learn new protocols, and easier to debug the application without having to look in multiple places. “We don’t have to spend time learning other technologies, and we can assume we’ll get good performance with the same kind of queries we’re used to,” said Mytton.

Additionally, MongoDB cuts costs and frees up resources by simplifying Server Density’s infrastructure and enabling developers to interact directly with the database. “We can focus on system improvements rather than thinking up clever ways to store data,” said Mytton. As a result, Server Density can deliver new functionality more quickly to accelerate the product roadmap.

Lessons Learned

For developers new to MongoDB, Mytton recommends taking time to understand the implications of sharding and shard keys, as the shard key can be difficult to change after the fact. It is also important to understand write concerns, because MongoDB defaults to unsafe but fast writes. This is fully customizable, from limited durability all the way through to guaranteeing that data has been replicated to multiple nodes, and everything in between. This means you can flexibly choose trade-offs between durability and performance whilst still using a single data store.
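
As an illustration of that spectrum, the sketch below uses the Python driver’s write concern options to request a different durability level per collection (current drivers default to acknowledged writes, so the fastest setting has to be asked for explicitly); the connection string and collection names are hypothetical.

    from pymongo import MongoClient, WriteConcern

    client = MongoClient("mongodb://db1.example.com,db2.example.com/?replicaSet=rs0")
    db = client.monitoring  # hypothetical database name

    # Fire-and-forget: fastest, but the client never hears about failures.
    fast = db.get_collection("metrics", write_concern=WriteConcern(w=0))

    # Acknowledged by the primary only (the usual default today).
    acked = db.get_collection("metrics", write_concern=WriteConcern(w=1))

    # Journaled and replicated to a majority of members: slowest,
    # but the write survives the loss of the primary.
    durable = db.get_collection(
        "payments", write_concern=WriteConcern(w="majority", j=True)
    )

    fast.insert_one({"event": "heartbeat"})
    durable.insert_one({"event": "invoice_created"})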

More About MongoDB at Server Density