Ingesting and processing 100 million events per day is no trivial task. To serve real-time and historical analysis of that data, we integrate Redis, MongoDB, Infobright, and Cassandra. We map/reduce, aggregate, apply TTLs, and shard across multiple data centers and data stores. Finally, we wrap it all in a Node.js API for speed, consistent access patterns, and fault tolerance, forming a complete data access and storage suite. Using each data store effectively requires knowing what each storage layer excels at and which tradeoffs to make. This talk traces the evolution of the architecture, tools, capabilities, and lessons learned in our version of Polyglottany.
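As a taste of the "aggregate with TTLs" pattern mentioned above, here is a minimal sketch of time-bucketed counters that expire after a retention window, the same shape as a Redis INCR-plus-EXPIRE counter. An in-memory Map stands in for Redis here, and all names are illustrative, not from the talk itself.

```javascript
// Sketch: per-minute event counters with a TTL, mimicking Redis INCR + EXPIRE.
// The Map is a stand-in for Redis; in production each bucket would be a
// Redis key whose expiry Redis enforces for us.
class RollingCounters {
  constructor(retentionMs) {
    this.retentionMs = retentionMs;
    this.buckets = new Map(); // key -> { count, expiresAt }
  }

  // Count an event into the current minute's bucket.
  increment(eventType, now = Date.now()) {
    const minute = Math.floor(now / 60000);
    const key = `${eventType}:${minute}`;
    const entry =
      this.buckets.get(key) || { count: 0, expiresAt: now + this.retentionMs };
    entry.count += 1;
    this.buckets.set(key, entry);
    return entry.count;
  }

  // Sum all live (unexpired) buckets for an event type.
  total(eventType, now = Date.now()) {
    let sum = 0;
    for (const [key, entry] of this.buckets) {
      if (key.startsWith(`${eventType}:`) && entry.expiresAt > now) {
        sum += entry.count;
      }
    }
    return sum;
  }
}

const counters = new RollingCounters(5 * 60000); // 5-minute retention
counters.increment("pageview");
counters.increment("pageview");
console.log(counters.total("pageview")); // 2
```

The TTL is what keeps the "real-time" store small: recent buckets answer live queries, while expired data is only available from the historical stores.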