Why Open Source Is Essential To Big Data
Gartner analyst Merv Adrian recently highlighted some of the recent movements in Hadoop Land, with several companies introducing products "intended to improve Hadoop speed." This seems odd, as that wouldn't be my top pick for how to improve Hadoop or, really, most of the Big Data technologies out there. By many accounts, the biggest need in Hadoop is improved ease of use, not improved performance, something Adrian himself confirms: Hadoop already delivers exceptional performance on commodity hardware, compared to its stodgy proprietary competition. Where it's still lacking is in ease of use.

Not that Hadoop is alone in this. As Mare Lucas asserts:

"Today, despite the information deluge, enterprise decision makers are often unable to access the data in a useful way. The tools are designed for those who speak the language of algorithms and statistical analysis. It’s simply too hard for the everyday user to 'ask' the data any questions – from the routine to the insightful. The end result? The speed of big data moves at a slower pace … and the power is locked in the hands of the few."

Lucas goes on to argue that the solution to the data scientist shortage is to take the science out of data science; that is, to consumerize Big Data technology such that non-PhD-wielding business people can query their data and get back meaningful results.

The Value Of Open Source To Deciphering Big Data

Perhaps. But there's actually an intermediate step before we reach the Promised Land of fully consumerized Big Data. It's called open source. Even with a technology like Hadoop that is open source yet still too complex, the benefits of using Hadoop far outweigh the costs (financial and productivity-wise) of licensing an expensive data warehousing or analytics platform. As Alex Popescu writes, Hadoop "allows experimenting and trying out new ideas, while continuing to accumulate and storing your data. It removes the pressure from the developers. That’s agility."
But these benefits aren't unique to Hadoop. They're inherent in any open-source project. Now imagine open-source software that fits our Big Data needs, is exceptionally easy to use, and is almost certainly already being used within our enterprises. That is the promise of MongoDB, consistently cited as one of the industry's top-two Big Data technologies. MongoDB makes it easy to get started with a Big Data project.

Using MongoDB To Innovate

Consider the City of Chicago. The Economist wrote recently about the city's predictive analytics platform, WindyGrid. What The Economist didn't mention is that WindyGrid started as a pet project on chief data officer Brett Goldstein's laptop. Goldstein started with a single MongoDB node and iterated from there, turning it into one of the most exciting data-driven applications in the industry today.

Given that we often don't know exactly which data to query, how to query it, or how to put data to work in our applications, this is precisely how a Big Data project should work: start small, then iterate toward something big. This kind of tinkering is difficult, if not impossible, with a relational database, as The Economist's Kenneth Cukier points out in his book, Big Data: A Revolution That Will Transform How We Live, Work, and Think:

"Conventional, so-called relational, databases are designed for a world in which data is sparse, and thus can be and will be curated carefully. It is a world in which the questions one wants to answer using the data have to be clear at the outset, so that the database is designed to answer them - and only them - efficiently."

But with a flexible document database like MongoDB, it suddenly becomes much easier to iterate toward Big Data insights. We don't need to go out and hire data scientists.
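Cukier's contrast between fixed relational schemas and flexible documents can be sketched in a few lines of Python. Here plain dicts and a list stand in for MongoDB documents and a collection (a real application would use the pymongo driver's `insert_one` and `find` against a running server); the field names and the `find` helper are illustrative, not MongoDB's actual implementation:

```python
# Plain dicts stand in for MongoDB documents: each record can carry
# different fields, so data can be accumulated before the questions
# are known -- no upfront schema required.
incidents = []  # stand-in for a MongoDB collection

incidents.append({"type": "pothole", "ward": 32})
incidents.append({"type": "streetlight", "ward": 5, "reported_by": "311"})
# A later iteration adds a field no one anticipated at the outset:
incidents.append({"type": "pothole", "ward": 32, "severity": "high"})

def find(collection, query):
    """Minimal analogue of a MongoDB find(): equality match on fields."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in query.items())]

# Both pothole reports in ward 32 come back, with or without the new field.
print(find(incidents, {"type": "pothole", "ward": 32}))
```

The point of the sketch is the workflow, not the code: documents with new fields slot in beside old ones, so the questions can evolve after the data has started accumulating.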
Rather, we simply need to apply existing, open-source technology like MongoDB to our Big Data problems, which jibes perfectly with Gartner analyst Svetlana Sicular's mantra that it's easier to train existing employees on Big Data technologies than it is to train data scientists on one's business. Except, in the case of MongoDB, odds are that enterprises are already filled with people who understand MongoDB, as 451 Research's LinkedIn analysis suggests.

In sum, Big Data needn't be daunting or difficult. It's a download away.
MongoDB: Jobs tell a growing community story
Community is the lifeblood of an open-source project, and so measuring the size and vitality of a given open-source community is critical when deciding whether to adopt a particular project. After all, the greater the number of developers contributing to a project, and the greater the number of users putting the software through its paces, the safer your investment in that project will be.

Unfortunately, there's no One True Way to measure open-source community. There are many ways to measure the community around an open-source project, each with its own strengths and shortcomings. Still, there are reasonable metrics that approximate it, and of the various options, one of the most robust is job trend data.

Here, new data released by 451 Research makes it clear that MongoDB is "way ahead of all the others … [and] outpacing many of its rivals" in terms of jobs created and posted. 451 Research pulls this jobs data from LinkedIn. Indeed.com, however, comes to the same conclusion based on different jobs sources.

Job trends are not the only way to measure the vibrancy of an open-source project, of course. Acquia, for example, has its own approach to measuring the strength of the Drupal community. And it's pretty telling that within hours of announcing that 10gen would be offering free online MongoDB training, thousands of people signed up, with thousands more joining every day since. But job trends may be an even better indicator, because they indicate real dollars being spent on real people to build real applications.

Early in the commercial lifespan of open source, we used downloads to track open-source community success. There were a variety of problems with this approach, not least the dispersal of open-source projects from Sourceforge to a number of new competitors, most recently GitHub.
Other means have arisen, as 10gen president Max Schireson describes in a 2011 blog post. Of the different measures Schireson outlines (forum posts, Google Insights, job postings), job postings may actually be the clearest indication of serious community traction.

Marten Mickos, current CEO of Eucalyptus Systems and former CEO of MySQL, used to argue that in a given open-source community, for every 1,000 users there were maybe 100 developers contributing to the project and one paying customer. Job trends data are a great way to cut across these categories and measure real interest in an open-source project, minus the noise of Google Insights or the gamesmanship that downloads afford.

Perfect? No. But in the absence of perfection, jobs data are a solid way to measure the size and seriousness of the community around an open-source project.
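Mickos's rule of thumb translates into a quick back-of-envelope sizing exercise. A minimal sketch, assuming his rough 1,000 users : 100 developers : 1 paying customer ratio (the function name and default ratios are illustrative, not measured figures):

```python
def community_estimate(users, devs_per_1000=100, customers_per_1000=1):
    """Rough community funnel per Mickos's rule of thumb:
    ~100 contributing developers and ~1 paying customer per 1,000 users."""
    return {
        "users": users,
        "developers": users * devs_per_1000 // 1000,
        "paying_customers": users * customers_per_1000 // 1000,
    }

# e.g. a project with 50,000 users would imply roughly
# 5,000 contributing developers and 50 paying customers.
print(community_estimate(50_000))
```

The interesting implication is how top-heavy the funnel is: job postings sit near the paying end of it, which is why they cut through the noise that raw user counts or downloads carry.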