Data Scientist Shortage? There's An App For That
Big Data is all the rage, but it will apparently come to a crashing halt due to a shortage of data scientists. As I've argued elsewhere, this is mostly a sham. Context is critical for making use of a company's data, and the people with context already work for the enterprise. So it becomes a matter of training the people one has, rather than going off on a scouting trip for the mythical data scientist.

Nor will the "science" of Big Data remain such for long, according to IBM's James Kobielus. As he notes, "core data scientist aptitudes -- curiosity, intellectual agility, statistical fluency, research stamina, scientific rigor, skeptical nature -- are widely distributed throughout workforces everywhere." He then points to a few key trends that will make data science less of a science:

- As more data discovery, acquisition, preparation, and modeling functions are automated through better tools, today's data scientists will have more time for the core of their jobs: statistical analysis, modeling, and interaction exploration.
- Data scientists are developing fewer models from scratch. That's because more and more big data projects run on application-embedded analytic models integrated into commercial solutions....
- Open source communities and tools will greatly expand the pool of knowledgeable, empowered data scientists at your disposal, either as employees or partners.

This jibes with Cloudera CEO Mike Olson's contention that "there will be enormous Hadoop adoption, but you'll get it by virtue of the applications you run." But whether an organization interprets its data through applications or directly using open-source technologies, one thing remains true: people are critical to making sense of Big Data. The data won't speak for itself. It's therefore critical to find people inside one's organization who can help make sense of its data. The good news? They're already available and on the payroll.
Why Open Source Is Essential To Big Data
Gartner analyst Merv Adrian recently highlighted some of the latest movements in Hadoop Land, with several companies introducing products "intended to improve Hadoop speed." This seems odd, as that wouldn't be my top pick for how to improve Hadoop or, really, most of the Big Data technologies out there. By many accounts, the biggest need in Hadoop is improved ease of use, not improved performance, something Adrian himself confirms. Hadoop already delivers exceptional performance on commodity hardware compared to its stodgy proprietary competition. Where it's still lacking is in ease of use.

Not that Hadoop is alone in this. As Mare Lucas asserts:

Today, despite the information deluge, enterprise decision makers are often unable to access the data in a useful way. The tools are designed for those who speak the language of algorithms and statistical analysis. It’s simply too hard for the everyday user to “ask” the data any questions – from the routine to the insightful. The end result? The speed of big data moves at a slower pace … and the power is locked in the hands of the few.

Lucas goes on to argue that the solution to the data scientist shortage is to take the science out of data science; that is, to consumerize Big Data technology so that non-PhD-wielding business people can query their data and get back meaningful results.

The Value Of Open Source To Deciphering Big Data

Perhaps. But there's actually an intermediate step before we reach the Promised Land of full consumerization of Big Data. It's called open source. Even with a technology like Hadoop that is open source yet still too complex, the benefits of using it far outweigh the costs (financial and productivity-wise) associated with licensing an expensive data warehousing or analytics platform. As Alex Popescu writes, Hadoop "allows experimenting and trying out new ideas, while continuing to accumulate and storing your data. It removes the pressure from the developers. That’s agility."
But these benefits aren't unique to Hadoop. They're inherent in any open-source project. Now imagine we could get open-source software that fits our Big Data needs, is exceptionally easy to use, and is almost certainly already being used within our enterprises. That is the promise of MongoDB, consistently cited as one of the industry's top two Big Data technologies. MongoDB makes it easy to get started with a Big Data project.

Using MongoDB To Innovate

Consider the City of Chicago. The Economist wrote recently about the city's predictive analytics platform, WindyGrid. What The Economist didn't mention is that WindyGrid started as a pet project on chief data officer Brett Goldstein's laptop. Goldstein started with a single MongoDB node and iterated from there, turning it into one of the most exciting data-driven applications in the industry today. Given that we often don't know exactly which data to query, or how to query it, or how to put data to work in our applications, this is precisely how a Big Data project should work: start small, then iterate toward something big.

This kind of tinkering is simply difficult, if not impossible, with a relational database, as The Economist's Kenneth Cukier points out in his book, Big Data: A Revolution That Will Transform How We Live, Work, and Think:

Conventional, so-called relational, databases are designed for a world in which data is sparse, and thus can be and will be curated carefully. It is a world in which the questions one wants to answer using the data have to be clear at the outset, so that the database is designed to answer them - and only them - efficiently.

But with a flexible document database like MongoDB, it suddenly becomes much easier to iterate toward Big Data insights. We don't need to go out and hire data scientists.
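To make that contrast concrete, here is a minimal sketch of the schema-flexible document model described above, using plain Python dicts and lists as stand-ins for MongoDB documents and collections, so no server or driver is required. The `find` helper and the field names are illustrative assumptions, not MongoDB's actual API:

```python
# A list stands in for a MongoDB collection; dicts stand in for documents.
collection = []

# Unlike rows in a relational table, documents need not share a schema
# that was fixed up front -- each record carries its own fields.
collection.append({"type": "incident", "ward": 42, "desc": "pothole"})
collection.append({"type": "weather", "temp_f": 71.5, "ward": 42})

def find(coll, **criteria):
    """Return documents whose fields match all the given key/value pairs.

    A toy analogue of a document-database query: the question can be
    posed long after the data has already been collected.
    """
    return [doc for doc in coll
            if all(doc.get(k) == v for k, v in criteria.items())]

# A question we did not have to anticipate at design time:
matches = find(collection, ward=42, type="incident")
print(matches)  # the pothole document
```

The point is not the ten lines of Python, but the workflow they mimic: accumulate heterogeneous records first, decide what to ask of them later, which is the "start small, then iterate" pattern described above.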
Rather, we simply need to apply existing open-source technology like MongoDB to our Big Data problems. This jibes perfectly with Gartner analyst Svetlana Sicular's mantra that it's easier to train existing employees on Big Data technologies than it is to train data scientists on one's business. Except, in the case of MongoDB, odds are that enterprises are already filled with people who understand it, as 451 Research's LinkedIn analysis suggests.

In sum, Big Data needn't be daunting or difficult. It's a download away.
The 'middle class' of Big Data
So much is written about Big Data that we tend to overlook a simple fact: most data isn’t big at all. As Bruno Aziza writes in Forbes, “it isn’t so” that “you have to be Big to be in the Big Data game,” echoing a similar sentiment from ReadWrite’s Brian Proffitt. Large enterprise adoption of Big Data technologies may steal the headlines, but it’s the “middle class” of enterprise data where the vast majority of data, and money, is.

There’s a lot of talk about zettabytes and petabytes of data, but as EMA Research highlights in a new study, “Big Data’s sweet spot starts at 110GB and the most common customer data situation is between 10 to 30TB.” Small? Not exactly. But Big? No, not really. Couple this with the fact that most businesses fall into the 20-500-employee range, as Intuit CEO Brad Smith points out, and it’s clear that the biggest market opportunity for Big Data is within the big pool of relatively small enterprises with relatively small data sets.

Call it the vast middle class of enterprise Big Data. Call it whatever you want. But it’s where most enterprise data sits. The trick is first to gather that data, and then to put it to work. A new breed of “data-science-as-a-service” companies like Metamarkets and Infochimps has arisen to lower the bar for culling insights from one’s data. While these tools can be used by enterprises of any size, I suspect they’ll be particularly appetizing to small-to-medium-sized enterprises, those that don’t have the budget or inclination to hire a data scientist. (This might be the right way to go, anyway, as Gartner highlights: “Organizations already have people who know their own data better than mystical data scientists.” What they really need is access to the data and tools to process it.)

Intriguingly, here at 10gen we’ve seen a wide range of companies, large and small, adopt MongoDB as they build out data-centric applications, but not always with Big Data in mind.
In fact, while MongoDB and Hadoop are top of mind for data scientists and other IT professionals, as Wikibon has illustrated, many of 10gen’s smaller customers and users aren’t thinking about Big Data at all. Such users are looking for an easy-to-use, highly flexible data store for their applications. The fact that MongoDB also has their scalability needs covered is a bonus, one that many will unlock later in their deployment, when they discover they’ve been storing data that could be put to use.

In the RDBMS world, scale is a burden, particularly in terms of cost (bigger scale = bigger hardware = bigger license fees). Today, with NoSQL, scale is a given, allowing NoSQL vendors like 10gen to complement scalability with other benefits. It’s a remarkable turn of events for technology that emerged from the needs of the web giants to manage distributed systems at scale. We’re all the beneficiaries. Including SMBs.

We don’t normally think about small-to-medium-sized businesses when we think of Big Data, but we should. SMBs are the workhorse of the world’s economies, and they’re quietly, collectively storing massive quantities of data. The race is on to help these companies put their comparatively small quantities of data to big use. It’s a race that NoSQL technologies like MongoDB are very well-positioned to win.

Tagged with: MongoDB, big data, SMB, Hadoop, rdbms, Infochimps, Metamarkets, Gartner, Wikibon, data scientist