
Social media service Buffer runs MongoDB

It’s not exactly news that Buffer runs MongoDB. In 2011, Buffer co-founder Leo Widrich blogged that Buffer was migrating from MySQL to MongoDB in order to better scale to accommodate user growth. A year and a half later, Buffer has grown to 450,000 users and nearly $1 million in revenue, all with just $400,000 in angel funding (from some pretty impressive angels). But it’s news to me. I love Buffer and use it daily to schedule tweets (@mjasay). When I found out that one of my favorite services was using MongoDB as its data infrastructure, I just had to call it out.

Widrich writes of the migration:

We moved from MySQL, a database system, over to another one. All the mechanisms that send your Tweets, store your Twitter accounts, your analytics and more have changed. We moved to a new database system called MongoDB. What this meant in terms of development was to rewrite the whole code for Buffer so it works with this new system. The reason we swapped to this new system was mainly growing pains and issues as we are expanding to other networks. Buffer is about to expand to Facebook, Google+ and LinkedIn. In order to do this, we had to significantly ramp up our servers and do all the expansion work with a new database. Another key issue was that we just had so many users that the site slowed down for many of you. MongoDB, the new system, will handle a lot of our scaling issues a lot better than the existing one.

All of which is nice, as it gives me bragging rights with my parents. But the truly important thing is the user experience. If you’ve used Buffer, especially since the recent design refresh, you know (and have said) that Buffer is awesome. Not awesome because it uses MongoDB, but because Joel, Leo, and Tom, the founder and co-founders, respectively, have built a great service that fills a real need. At 10gen, we’re just happy to be an integral part of the data infrastructure that enables it. But we know that the credit belongs to the Buffer team that has made such a great service.

- Posted by Matt Asay, VP of Corporate Strategy

Tagged with: Buffer, case study, analytics, mysql

January 23, 2013

MongoDB at OrderGroove

We recently chatted with VP of Engineering Jorge Escobar about his experience using MongoDB at fast-growing SaaS startup OrderGroove.

Tell us a little about OrderGroove.

Founded in 2008, OrderGroove is a platform that allows online consumers to subscribe to frequently purchased products via the website they're ordering from. When a consumer purchases products from an e-commerce website, the OrderGroove platform injects content, via JavaScript, into the purchase flow that allows the consumer to receive the product at a specific frequency and to fully control his subscription program via a personalized control panel.

What technologies are you using, and how does MongoDB fit into your stack?

OrderGroove is a write-heavy application written in Python and Django. We started with a MySQL master and two slaves to allow for the rapid iterations we went through initially as we developed and refined the platform. After a surge in new client signings (especially clients with significant traffic), we determined that this setup would prove insufficient if the growth trends continued at the pace we were experiencing. At that point my team and I discussed the options for distributing writes horizontally. We were ready to write something from scratch involving application-level MySQL sharding. However, I had played with MongoDB a few years back, and we decided to give the technology a second look. MongoDB proved to be the perfect fit. It allows us to grow (or shrink) elastically as our merchants' needs fluctuate, especially around seasonal events, without requiring us to develop and maintain custom data store layers.

It sounds like you have a lot of data! Can you tell us about your experiences using sharding?

More than the amount of data, our challenge was writing fast enough. Sharding was the perfect solution for that challenge.
Setting up sharding took some analysis, as you want a shard key that spreads writes horizontally. Thanks to a consultation with Sarah Branfman and one of the 10gen engineers, we quickly knew we were on the right track. Our shard key is a combination of merchant, session id, and a timestamp. Four months after we launched the first batch of clients on MongoDB, we saw incredible growth in traffic, with new merchants that had much higher levels of traffic than we had seen before. It was just a matter of adding new replica sets and modifying the sharding configuration, and our writes were once again under control. Now we constantly monitor our cluster using 10gen’s MMS so that we’re prepared ahead of time for traffic growth patterns.

What advice would you give to other e-commerce startups using MongoDB?

MongoDB is the right solution for OrderGroove’s needs. If you have write-heavy applications, MongoDB is a no-brainer. One thing to take into consideration is that data is eventually consistent, so we use MongoDB mostly for our front-end tracking, but still use MySQL for our transactional data processes and most probably will continue doing so. It's important to pick the right data store model for your needs (or a combination of them); I wrote about this on my jungleG blog. Another strong recommendation I would give to any company that's going to use MongoDB is to take the time to install the MMS agents. It doesn't take more than 15 minutes, and it lets you keep an eye on everything that's happening on your MongoDB nodes; it also helps the 10gen team debug any issues you might be facing, since you can share the data with them using your client id. We've been able to identify potential issues before they became real ones just by monitoring MMS.

Any exciting plans for the future that you would like to share?

We are looking closely at the new aggregation framework in MongoDB, as we still generate analytics in MySQL using regular computation. With the aggregation framework, we could potentially handle more data and accelerate delivery even further for clients.

Tagged with: MMS, MongoDB, sharding, aggregation framework, MySQL
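A compound shard key like the one OrderGroove describes (merchant, session id, timestamp) is declared when the collection is sharded. A minimal sketch follows; the database, collection, and field names are hypothetical, and in practice the commands would be issued against a mongos router, e.g. via pymongo's client.admin.command(...):

```python
# Hedged sketch: declaring a compound shard key similar to the one
# described in the interview. All names below are invented for
# illustration, not taken from OrderGroove's actual schema.

# Enable sharding for the database holding front-end tracking data.
enable_sharding = {"enableSharding": "tracking"}

# Shard the events collection on a compound key. Leading with the
# merchant spreads writes across merchants; session id and timestamp
# add cardinality so chunks can split finely under heavy write load.
shard_collection = {
    "shardCollection": "tracking.events",
    "key": {"merchant_id": 1, "session_id": 1, "created_at": 1},
}

# With a live cluster these would run as, e.g.:
#   client.admin.command(enable_sharding)
#   client.admin.command(shard_collection)
```

Since key order matters in a compound shard key, production code sent over the wire should use an order-preserving mapping (pymongo's bson.SON, or a plain dict on Python 3.7+).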
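The aggregation framework Escobar mentions moves analytics like these into the database itself. A sketch with invented event and field names: the pipeline below would run as db.events.aggregate(pipeline), and the pure-Python loop underneath shows what its $match and $group stages compute on a handful of sample documents:

```python
# Hedged sketch of an analytics query expressed as an aggregation
# pipeline. Field and event names are hypothetical.
pipeline = [
    {"$match": {"event": "subscription_created"}},            # filter
    {"$group": {"_id": "$merchant_id", "count": {"$sum": 1}}}, # count per merchant
    {"$sort": {"count": -1}},                                  # busiest first
]

# What $match + $group/$sum compute, replayed on in-memory documents:
sample = [
    {"event": "subscription_created", "merchant_id": "acme"},
    {"event": "subscription_created", "merchant_id": "acme"},
    {"event": "page_view", "merchant_id": "acme"},
    {"event": "subscription_created", "merchant_id": "globex"},
]
counts = {}
for doc in sample:
    if doc["event"] == "subscription_created":          # $match
        counts[doc["merchant_id"]] = counts.get(doc["merchant_id"], 0) + 1  # $sum: 1

print(counts)  # {'acme': 2, 'globex': 1}
```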

September 10, 2012

Getting to know Geospatial Indexing on MongoDB with TST

With built-in geospatial indexing, MongoDB is an excellent option for storing and querying location data. But how are these capabilities being used in the real world? We talked with one of 10gen's partners, Thermopylae Sciences + Technology (TST), about how it has been leveraging MongoDB for customers in the Federal and Commercial markets.

Who is TST?

TST is a 10gen partner that focuses on implementing MongoDB for customers in the Federal and Commercial sector. We are avid open source supporters and have contributed additional tools and capabilities to MongoDB and other OSS projects.

How would you define geospatial?

Geospatial deals with a few key areas. The underlying, foundational layer of geospatial is generally known as a map or an image; managing large amounts of data in this layer can reach multi-petabyte scale quite quickly. The second element of geospatial is data that helps define the foundational layer, generally terrain elevation data or 3D buildings (urban terrain). The data on top of the previous layers, the static information, is another component; it includes information that does not change frequently, such as activities, frequency of activities, and reports on events at a certain place and time. Static information includes historic data, such as how many robberies occurred in your neighborhood over the previous 5 years. The final set of data is dynamic geo data, which deals with things that are moving, such as tracking your friends on Google Latitude. The military's Common Operating Picture displays, which show the position of all their troops in a region, are another example of dynamic data. As geospatial data management capabilities continue to evolve, the dynamic geo data space is growing considerably.

Why MongoDB for geospatial?

As a document-oriented storage system, MongoDB allows you to store complex data types, in contrast to distributed hash tables, key-value stores, or relational databases that only permit storage of simple types. The real value for TST is the ability to support operations-focused customers. These folks, usually in the US Military or other Intelligence Community agencies, have geospatial data that numbers into the tens of millions of objects, including hyperspectral data and spatio-temporal data that changes rapidly. This volume of data requires the scale-out functionality that MongoDB provides. MongoDB is easy to extend and configure from a developer's perspective, which leads to rapid development, and the technology is schema-flexible, with a query language retaining some of the properties of SQL. Additionally, we have found that MongoDB's efficient memory management design complements our approach well: it enables storage of the entire index directly in memory, as opposed to extensive memory-to-disk swapping, which hampers write performance.

Outside of customer application development, how else does TST use MongoDB for geo?

TST has three primary capabilities that currently leverage MongoDB. iSpatial is a geospatial framework that uses MongoDB as one of its indexes for data. As improvements to MongoDB's geospatial management features are contributed, TST will continually enhance iSpatial to take advantage of them. iHarvest is a tool that uses MongoDB to monitor hundreds of millions of events on a network. Those events could be carried out by system users, sensors on the network, or hardware devices on the network. iHarvest creates an individual model for each actor and can then conduct a variety of analyses that help protect against insider threats, promote collaboration among individuals with common interests, provide organization heads a roll-up of what their organization is working on, and alert users within an organization to new data they might be interested in reviewing. Ubiquity is TST's mobile software framework that leverages MongoDB through iSpatial. This software enables everyday users to drag and drop widgets into a mobile application container to create their own unique mobile apps.

TST has extended the MongoDB core codebase with an initial update that abstracts the current indexing so it supports different indexing structures. This allows a variety of additional spatial capabilities; specifically, the TST extensions now allow MongoDB to support 3D and 4D searching on geospatial data. TST plans to continue contributing to the MongoDB core around spatial applications and will be working on indexing hyperspectral data and enhancing the geo sharding capabilities. TST will also be rotating its development staff through the open source contribution team in its Applied Sciences Group, so that a significant percentage of its top developers are smart on MongoDB and able to contribute updates to the core software.

To learn more about TST, visit their website. Also, check out Nicholas Knize's presentation from MongoDC on Geospatial Indexing and MongoDB.

Tagged with: geospatial indexing, mysql, software, foss, open source, government IT, MongoDB, Mongo, NoSQL, Polyglot persistence, 10gen
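The geospatial queries discussed in this interview rest on MongoDB's 2d indexes over longitude/latitude pairs (the operators shown are the 2012-era forms; $within was later renamed $geoWithin). A minimal sketch with hypothetical collection and field names; with pymongo, the index would be created via db.places.create_index([("loc", "2d")]):

```python
# Hedged sketch of 2d geospatial indexing and querying in MongoDB.
# Collection name ("places") and field name ("loc") are invented.

# Index specification: a 2d index over [longitude, latitude] pairs.
index_spec = [("loc", "2d")]

# Proximity query: tracked objects nearest a point, sorted by distance.
# This matches the "dynamic geo data" use case (moving friends, troops).
near_query = {"loc": {"$near": [-77.05, 38.87]}}  # lon, lat

# Bounding-box query over static data, e.g. robberies in a neighborhood
# over the previous 5 years.
box_query = {
    "loc": {"$within": {"$box": [[-77.08, 38.85], [-77.02, 38.91]]}}
}

# Against a live server, these would run as, e.g.:
#   db.places.find(near_query).limit(10)
#   db.places.find(box_query)
```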

July 13, 2012