Leaf in the Wild: GeistM Uses Atlas to Thrive in MarTech
February 4, 2020
Can you start by telling us a little bit about your company?
GeistM is a hyper-growth marketing technology (“MarTech”) business that provides a full, turnkey, and innovative solution for digital marketing. We are the world’s fastest-growing MarTech platform, listed in the Inc. 500 and Crain’s Fast 50, with clients including LG, Procter & Gamble, and HelloFresh. We have amassed 400 billion impressions, resulting in 1.4BN for our clients.
We deliver performance-driven marketing at scale because of our deep and wide use of technology, embodied in our integrated MarTech business operations platform, Blackfire. Blackfire is a proprietary technology with distinct modules: Campaign Creation, Data Aggregator, Creative Asset Manager, and Attribution Analytics. Together, these modules allow GeistM to provide our clients with best-in-class marketing data, using those insights to further drive the performance of our marketing campaigns.
How are you using MongoDB?
MongoDB is the database on which Blackfire is built, and we continue to use it as we extend into related applications. MongoDB allows us to easily manage very large amounts of data that must be captured and enriched within single collections, drawing on other collections in the enrichment process.
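One way to enrich documents in one collection from another is an aggregation pipeline with a `$lookup` stage. A minimal sketch of that pattern follows; the collection and field names (`events`, `campaigns`, `campaign_id`) are illustrative assumptions, not GeistM's actual schema:

```python
# Hypothetical sketch: enrich raw events with campaign metadata held in a
# separate collection, via an aggregation $lookup stage. All collection and
# field names here are illustrative, not GeistM's real schema.

enrichment_pipeline = [
    # Join each event to its campaign document
    {"$lookup": {
        "from": "campaigns",           # enrichment source collection
        "localField": "campaign_id",
        "foreignField": "_id",
        "as": "campaign",
    }},
    # $lookup produces an array; unwind it to a single embedded document
    {"$unwind": "$campaign"},
    # Keep only the fields downstream consumers need
    {"$project": {
        "ts": 1,
        "client": 1,
        "campaign.name": 1,
        "campaign.channel": 1,
    }},
]

# Against a live deployment this would run as:
# db.events.aggregate(enrichment_pipeline)
```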
What were you using before MongoDB?
Blackfire was first prototyped with MySQL and Keen.io. Keen provided a hosted platform tailor-made for event capture; however, we quickly outgrew it, and MySQL became a nuisance given our fluid data needs. At that point, we considered Cassandra, Redshift, and RDS, but the utter simplicity and independence of MongoDB had us hooked; within a day the best choice was obvious.
We migrated to MongoDB and ran a self-hosted installation ourselves for a year, but turned to MongoDB Atlas as our rapid growth became more daunting than ever.
Can you describe your MongoDB deployment and application stack?
Can you share any best practices on scaling MongoDB or operations?
MongoDB’s ability to support multiple databases within a single cluster has allowed us to partition our large-volume collections by client, keeping our environment simpler than it would be if we sharded, while also increasing client privacy. Working with a recent subset of very large historical collections has also improved performance.
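Per-client partitioning via separate databases in one cluster can be as simple as deriving the database name from the client. The sketch below assumes pymongo-style access; the naming scheme and collection names are hypothetical, not GeistM's actual convention:

```python
# Hypothetical sketch of per-client database partitioning: each client's
# high-volume collections live in a dedicated database within one cluster,
# so queries and access control are naturally scoped per client.
# The naming scheme is illustrative, not GeistM's real one.

def client_db_name(client_slug: str) -> str:
    """Derive a dedicated database name for a client."""
    return f"events_{client_slug}"

# With pymongo and a live cluster, access would look like:
# from pymongo import MongoClient
# client = MongoClient("mongodb+srv://...")
# db = client[client_db_name("hellofresh")]
# db.impressions.insert_one({...})

print(client_db_name("hellofresh"))  # → events_hellofresh
```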
We found that by casting our data in multiple ways, for different consumption requirements, we can vastly improve availability and performance. Data storage is cheap, but search is expensive. We move our data through multiple states, effectively answering questions before they are asked at the expense of duplicative storage.
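One common way to "answer questions before they are asked" is to periodically roll raw events up into a small summary collection with `$group` and `$merge`, trading duplicative storage for cheap reads. A sketch of that pattern, with illustrative names that are assumptions rather than GeistM's schema:

```python
# Hypothetical sketch: pre-aggregate raw events into a daily summary
# collection so dashboards read a small rollup instead of scanning very
# large raw collections. All names here are illustrative.

rollup_pipeline = [
    # Group raw events by campaign and day
    {"$group": {
        "_id": {"campaign": "$campaign_id", "day": "$day"},
        "impressions": {"$sum": 1},
        "conversions": {"$sum": "$converted"},
    }},
    # Write/refresh the results into a materialized summary collection
    {"$merge": {
        "into": "daily_campaign_stats",
        "whenMatched": "replace",
        "whenNotMatched": "insert",
    }},
]

# Run on a schedule against a live cluster:
# db.events.aggregate(rollup_pipeline)
```

The `$merge` stage keeps the summary collection up to date in place, so readers always query the already-computed answer.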
How is MongoDB performing for you?
It meets most of our needs so we are happy with it. Naturally, there are performance and scaling challenges. However, we have always been able to figure out a way to address these challenges in a timely fashion.
Can you elaborate on how you use the MongoDB query language and aggregation framework?
We consistently need to search through hundreds of millions, if not billions, of records spread across multiple collections in almost real time. This would be impossible without the MongoDB query language and aggregation framework.
The many high-level aggregation operations that mirror typical data analysis requirements greatly reduce the amount of data manipulation code we need to write. The result is enhanced programmer efficiency, fewer programming errors, and fewer optimization concerns than if these operations had to be built by combining primitive operations.
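As an example of a high-level operation standing in for hand-written data manipulation code, a single pipeline can express a typical analysis question such as conversion rate per channel over a date range. The field names below are illustrative assumptions, not GeistM's schema:

```python
# Hypothetical sketch: one aggregation pipeline answering an analysis
# question (conversion rate per channel in January 2020) that would
# otherwise require manual filtering, grouping, and arithmetic code.
# Field names are illustrative.

conversion_rate_pipeline = [
    # Restrict to the date range of interest
    {"$match": {"ts": {"$gte": "2020-01-01", "$lt": "2020-02-01"}}},
    # Tally impressions and conversions per channel
    {"$group": {
        "_id": "$channel",
        "impressions": {"$sum": 1},
        "conversions": {"$sum": "$converted"},
    }},
    # High-level operators like $divide replace hand-rolled arithmetic
    {"$project": {
        "impressions": 1,
        "conversions": 1,
        "rate": {"$divide": ["$conversions", "$impressions"]},
    }},
    # Best-performing channels first
    {"$sort": {"rate": -1}},
]

# db.events.aggregate(conversion_rate_pipeline)
```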
What’s the biggest impact MongoDB is having on your business?
MongoDB facilitates running our business. We can rely on it, and it delivers. We feel that very few options would be as effective as MongoDB, and investing in a category leader certainly minimizes risk. MongoDB’s accommodation of structural database changes at runtime has greatly decreased the time to deliver necessary changes. They also offer us advice and services, as well as excellent online courses through MongoDB University and frequent informational updates.
Do you have plans to use MongoDB for other applications?
Blackfire includes a data fabric subsystem for collecting data. The process begins with data gathered from both internal and external sources. That data is then normalized around a canonical internal form. Finally, the needed data is distributed to other systems for their use.
The fabric is unique in that the mappings, views, and procedures, as well as the business collections, are defined within the system itself and can be extended or modified at runtime. We are beginning the process of making this data fabric available to GeistM sister companies as a separate system.
What advice would you give someone who is considering using MongoDB for their next project?
Don’t write off Atlas just because it is a bit cheaper to host MongoDB yourself. Atlas provides a great deal of support, as well as backups, redundancy, etc. My second piece of advice would be to make sure your data model reflects the business you are supporting. There are many reasons not to enforce a schema on a collection as data requirements evolve, but it’s still important to have a structure that drives your data. If you create ad-hoc documents and collections, your application will get messy very quickly and you’ll miss out on optimization opportunities like the ability to organize your system into microservices.
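One way to keep a structure driving your data without enforcing a rigid schema is MongoDB's `$jsonSchema` validation, which constrains required fields while leaving the rest of the document free to evolve. A sketch with hypothetical field names and rules:

```python
# Hypothetical sketch: a $jsonSchema validator that pins down the core
# structure of a collection while still allowing extra fields, so documents
# can't drift into ad-hoc shapes. Field names and rules are illustrative.

event_validator = {
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["client", "campaign_id", "ts"],
        "properties": {
            "client": {"bsonType": "string"},
            "campaign_id": {"bsonType": "objectId"},
            "ts": {"bsonType": "date"},
            # Fields not listed here remain allowed, so the schema can evolve
        },
    }
}

# Applied when creating the collection on a live deployment (pymongo):
# db.create_collection("events", validator=event_validator)
```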
Is there anything else you’d like to share about GeistM?
In addition to the Bionic nature of Blackfire, the secret to Geist’s success is that we are a teaching organization. Our people spend a significant amount of their time helping others learn new things, and whenever a new technique is discovered, it is shared with all concerned parties. This applies equally to our technology team, so our use of MongoDB continues to evolve through our interactions with each other.