IBM has invested a great deal of money in harnessing massive volumes of data. Yet it's telling that in a post about Big Data today, the company chooses to highlight the even greater importance of data velocity:
True innovators are finding value in even the smallest bytes of data that move very rapidly into and out of the organization. That’s because most organizations will overlook these opportunities, wrongly thinking that because data moves too quickly and can’t be stored, there is no way to analyze it. Analyzing data in motion and capitalizing in the moment is the secret to success in the era of big data. This is where stream computing comes into play.
Stream computing changes where, when and how much of your business data you can analyze. By extracting insight from data as it is in motion, you can react to events as they are happening to reshape business outcomes. Store less, analyze more, and make better decisions, faster. From increased customer retention to earlier fraud detection to more frequent cross-selling, the benefits of stream computing are many.
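The core idea in the quoted passage, reacting to each event as it arrives while retaining only a small rolling window rather than storing everything, can be sketched in a few lines of Python. The rolling-mean anomaly check below (and its window and threshold parameters) is an illustrative assumption of ours, not IBM's or MongoDB's actual algorithm; think of it as a toy version of the "earlier fraud detection" use case.

```python
from collections import deque

def rolling_anomalies(events, window=5, threshold=3.0):
    """Flag values that deviate sharply from a rolling mean.

    Each event is analyzed the moment it arrives, and only the last
    `window` values are kept -- store less, analyze more.
    """
    recent = deque(maxlen=window)  # bounded memory, regardless of stream length
    flagged = []
    for value in events:
        if len(recent) == window:
            mean = sum(recent) / window
            # Flag values far outside the recent norm (hypothetical rule).
            if abs(value - mean) > threshold * max(mean, 1e-9):
                flagged.append(value)
        recent.append(value)
    return flagged

# A stream of transaction amounts with one outlier.
stream = [10, 11, 9, 10, 12, 10, 500, 11, 10]
print(rolling_anomalies(stream))  # the 500 stands out against its window
```

In a real deployment the loop would consume a live feed rather than a list, but the shape is the same: constant memory, a decision per event, no requirement to land the raw data in storage first.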
While MongoDB does a great job with data in copious quantities, arguably the better reason to use MongoDB is its ability to process streaming data. We're therefore glad to be working with IBM's InfoSphere team on ways to protect such sensitive, fast-moving corporate data.