ease of use


Making MongoDB Deployment Even Easier With Bitnami's One-Click Deployment Tool

One of the primary reasons for MongoDB's popularity is what a friend of mine calls "developer ergonomics." Simply put, MongoDB is very easy to install, configure, maintain and use. In partnership with Bitnami, the MongoDB development experience just got even better.

Given MongoDB's popularity with web developers, it is increasingly deployed alongside a few popular web application frameworks. Dubbed the MEAN stack, this combination includes MongoDB, ExpressJS, AngularJS and Node.js. Bitnami has taken the individual components and removed the friction of getting them to work seamlessly together by building a one-click deployment tool that lets developers deploy and manage the stack on-premise, on Amazon Web Services (AWS) or on Windows Azure.

MongoDB recently raised $150 million to help us accelerate further improvements to MongoDB, including to operational aspects of the MongoDB experience. But MongoDB is a community. As important as our work on the kernel and other elements of the leading NoSQL database is, we rely heavily on our community to improve the MongoDB experience. Bitnami has been making open-source software deployment easy for years, offering open-source applications and development stacks that are pre-integrated and configured to run in the cloud or on-premise. Now that Bitnami experience comes to MongoDB.

We welcome Bitnami to the MongoDB community, and welcome what it is doing for the MongoDB community. TechCrunch's Alex Williams offers more color on the announcement here.

November 14, 2013

Why Open Source Is Essential To Big Data

Gartner analyst Merv Adrian recently highlighted some of the recent movements in Hadoop Land, with several companies introducing products "intended to improve Hadoop speed." This seems odd, as that wouldn't be my top pick for how to improve Hadoop or, really, most of the Big Data technologies out there. By many accounts, the biggest need in Hadoop is improved ease of use, not improved performance, something Adrian himself confirms: Hadoop already delivers exceptional performance on commodity hardware compared to its stodgy proprietary competition. Where it's still lacking is in ease of use.

Not that Hadoop is alone in this. As Mare Lucas asserts:

Today, despite the information deluge, enterprise decision makers are often unable to access the data in a useful way. The tools are designed for those who speak the language of algorithms and statistical analysis. It’s simply too hard for the everyday user to “ask” the data any questions – from the routine to the insightful. The end result? The speed of big data moves at a slower pace … and the power is locked in the hands of the few.

Lucas goes on to argue that the solution to the data scientist shortage is to take the science out of data science; that is, consumerize Big Data technology such that non-PhD-wielding business people can query their data and get back meaningful results.

The Value Of Open Source To Deciphering Big Data

Perhaps. But there's actually an intermediate step before we reach the Promised Land of full consumerization of Big Data. It's called open source. Even with technology like Hadoop that is open source yet still too complex, the benefits of using Hadoop far outweigh the costs (financial and productivity-wise) associated with licensing an expensive data warehousing or analytics platform. As Alex Popescu writes, Hadoop "allows experimenting and trying out new ideas, while continuing to accumulate and storing your data. It removes the pressure from the developers. That’s agility."
But these benefits aren't unique to Hadoop. They're inherent in any open-source project. Now imagine we could get open-source software that fits our Big Data needs, is exceptionally easy to use, and is almost certainly already being used within our enterprises. That is the promise of MongoDB, consistently cited as one of the industry's top-two Big Data technologies. MongoDB makes it easy to get started with a Big Data project.

Using MongoDB To Innovate

Consider the City of Chicago. The Economist wrote recently about the City of Chicago's predictive analytics platform, WindyGrid. What The Economist didn't mention is that WindyGrid started as a pet project on chief data officer Brett Goldstein's laptop. Goldstein started with a single MongoDB node and iterated from there, turning it into one of the most exciting data-driven applications in the industry today. Given that we often don't know exactly which data to query, or how to query it, or how to put data to work in our applications, this is precisely how a Big Data project should work: start small, then iterate toward something big.

This kind of tinkering is difficult, if not impossible, with a relational database, as The Economist's Kenneth Cukier points out in his book, Big Data: A Revolution That Will Transform How We Live, Work, and Think:

Conventional, so-called relational, databases are designed for a world in which data is sparse, and thus can be and will be curated carefully. It is a world in which the questions one wants to answer using the data have to be clear at the outset, so that the database is designed to answer them - and only them - efficiently.

But with a flexible document database like MongoDB, it suddenly becomes much easier to iterate toward Big Data insights. We don't need to go out and hire data scientists.
Rather, we simply need to apply existing, open-source technology like MongoDB to our Big Data problems, which jibes perfectly with Gartner analyst Svetlana Sicular's mantra that it's easier to train existing employees on Big Data technologies than it is to train data scientists on one's business. Except, in the case of MongoDB, odds are that enterprises are already filled with people who understand MongoDB, as 451 Research's LinkedIn analysis suggests.

In sum, Big Data needn't be daunting or difficult. It's a download away.

May 2, 2013

MongoDB powers Mappy Health's tweet-based disease tracking

Twitter has come a long way from being the place to read what your friends ate for dinner last night (though it still has that). Now it’s also a place where researchers can track the ebb and flow of diseases, and take appropriate action. In early 2012, the U.S. Department of Health and Human Services challenged developers to design applications that use the free Twitter API to track health trends in real time. With $21,000 in prize money at stake, Charles Boicey, Chief Innovation Officer of Social Health Insights, and team got started on the Trending Now Challenge, and ultimately won with their MongoDB-powered solution, Mappy Health. Not bad, especially since the small team had only three weeks to put together a solution.

Choosing a Database

MongoDB was critical to getting the application done well, and on time. As Boicey tells it, "MongoDB is just a wonderful environment in which to work. What used to take weeks with relational database technology is a matter of days or hours with MongoDB."

Fortunately, Boicey had a running start. Having used MongoDB previously in a healthcare environment, and having seen how well it ingested health information exchange data in XML format, Boicey felt sure MongoDB could manage incoming Twitter data. Plus, Mappy Health needed MongoDB’s geospatial capabilities to track diseases by location. Finally, while the team evaluated other NoSQL options, “MongoDB was the easiest to stand up” and is “extremely fast.” To make the development process even more efficient, Mappy Health runs the service on Amazon EC2.
Processing the Data

While UCI has a Hadoop ecosystem Mappy Health could have used, the team found that its real-time algorithms and MapReduce jobs run much faster on MongoDB, and so it runs MapReduce within MongoDB. As Boicey notes, "Writing MapReduce jobs in JavaScript has been fairly simple and allows us to cache collections/hashes of data frequently displayed on the site easily using a Memcached middleman between the MongoDB server and the Heroku-served front-end web app."

This jibes well with Mappy Health’s overall rationale for choosing MongoDB: MongoDB doesn’t require a lot of work upfront (e.g., schema design; “doing the same thing in a relational database would require a lot of advance planning and then ongoing maintenance work like updating tables”), and MongoDB works really well and scales beautifully.

Since winning the Trending Now Challenge, Mappy Health has been working with a number of other organizations. We look forward to even bigger and better things from this team. Imagine what they could do if given a whole four weeks to build an application!

Tagged with: Mappy Health, case study, disease tracking, US Department of Health and Human Services, flexibility, ease of use, Amazon, EC2, dynamic schema
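The kind of map/reduce job Boicey describes can be sketched in plain JavaScript. This is a hypothetical illustration, not Mappy Health's actual code: the tweet documents, disease terms, and the tiny in-memory driver below stand in for a MongoDB collection and the server-side mapReduce command, but the map/emit/reduce structure mirrors what a MongoDB MapReduce job counting disease mentions would look like.

```javascript
// Hypothetical tweet documents, standing in for a MongoDB collection.
const tweets = [
  { text: "feeling awful, think I have the flu", state: "CA" },
  { text: "flu season is back", state: "NY" },
  { text: "allergies acting up again", state: "CA" },
];

// Illustrative watch list of disease terms.
const DISEASES = ["flu", "allergies"];

// map: emit one (term, 1) pair for each disease term found in a tweet.
function mapTweet(doc, emit) {
  for (const term of DISEASES) {
    if (doc.text.includes(term)) emit(term, 1);
  }
}

// reduce: sum all counts emitted under the same key.
function reduceCounts(key, values) {
  return values.reduce((a, b) => a + b, 0);
}

// Tiny in-memory driver mimicking MapReduce semantics: group emitted
// values by key, then apply reduce to each group.
function mapReduce(collection, map, reduce) {
  const buckets = new Map();
  for (const doc of collection) {
    map(doc, (key, value) => {
      if (!buckets.has(key)) buckets.set(key, []);
      buckets.get(key).push(value);
    });
  }
  const out = {};
  for (const [key, values] of buckets) out[key] = reduce(key, values);
  return out;
}

const counts = mapReduce(tweets, mapTweet, reduceCounts);
console.log(counts); // { flu: 2, allergies: 1 }
```

In MongoDB itself, the same `map` and `reduce` functions (with `emit` provided by the server) would be passed to the database, which also handles grouping keys across shards.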

March 18, 2013

Pearson / OpenClass Uses MongoDB for Social Learning Platform

We recently spoke with Brian Carpio of Pearson about OpenClass, a new project from Pearson with deep Google integration.

What is OpenClass?

OpenClass is a dynamic, scalable, fully cloud-based learning environment that goes beyond the LMS. OpenClass stimulates social learning and the exchange of content, coursework, and ideas, all from one integrated platform. OpenClass has all the LMS functionality needed to manage courses, but that's just the beginning.

Why did you decide to adopt MongoDB for OpenClass?

OpenClass leverages MongoDB as one of its primary databases because it offers serious scalability and improved productivity for our developers. With MongoDB, our developers can start working on applications immediately, rather than slogging through the upfront planning and DBA time that relational database systems require. Also, given that a big part of the OpenClass story will be how we integrate with both public and private cloud technologies, MongoDB's support for scale-out, commodity hardware is a better fit than traditional scale-up relational database systems that generally must run on big-iron hardware.

Can you tell us about how you’ve deployed MongoDB?

Currently we deploy MongoDB in our world-class datacenters and in Amazon's EC2 cloud environment, with future plans to move to private cloud technologies such as OpenStack. We leverage both Puppet and Fabric for deployment automation and rolling upgrades. We also leverage Zabbix and the mikoomi plugin for monitoring our MongoDB production servers. Currently each OpenClass feature / application leverages its own MongoDB replica set, and we expect to need MongoDB’s sharding features given the expected growth trajectory for OpenClass.

What recommendations would you give to other operations teams deploying MongoDB for the first time?

Automate everything!
Also, work closely with your development teams as they begin to design an application that leverages MongoDB; that is good advice for any new application that will be rolled into production. I would also say to look at Zabbix, as it has some amazing features for monitoring MongoDB in a single replica set or in a sharded configuration that can help you easily identify bottlenecks and spot when it’s time to scale out your MongoDB deployment. Finally, I would suggest subscribing to the #mongodb IRC channel, as well as the MongoDB Google Group, and don't be afraid to ask questions. I personally ask a lot of questions in the MongoDB Google Group and receive great answers not only from 10gen CTO Eliot Horowitz (he does seem to answer a lot of my questions) but from many other 10gen folks.

What is in store for the future with MongoDB at Pearson?

Our MongoDB footprint is only going to continue to grow. More and more development teams are playing with MongoDB as the foundation of their new application or OpenClass feature. We are working on migrating functionality out of both Oracle and Microsoft SQL Server to MongoDB where it makes sense, to relieve the current stress on those incumbent database technologies.

Thanks to Brian for telling us about OpenClass! Brian also blogs at www.briancarpio.com; be sure to check out his posts on MongoDB here and here and here and here and here.

Tagged with: case study, Pearson, OpenClass, scalability, flexibility, ease of use

February 28, 2013

Post+Beam's MongoDB-powered innovation factory

When your business is innovation, throttling creativity with rigid, upfront schema design is a recipe for frustration. It’s therefore not surprising that Post+Beam, an innovation and communications “factory,” turned to MongoDB to enable rapid development. Part startup incubator, part branding and communication agency, part development firm, Post+Beam takes ideas and turns them into products.

Post+Beam’s first MongoDB-based product is Linea, a cross-platform photo browsing application that extends from web to mobile and enables users to create and share stories through photos, focusing on the photos and the collaboration around them, not photo storage. In talking with lead engineer Jeff Chao, he mentioned MongoDB’s dynamic schema as a primary reason for using the NoSQL database:

The most important reason for using MongoDB from the start is rapid development. We wanted to spend just enough development time in spec’ing out a schema so we could get started on writing the application. We were then able to incrementally adjust the schema depending on various technical and non-technical requirements.

Another reason for choosing MongoDB is its default data representation: “We were able to build out an API to allow iOS clients to interact with our web service via JSON.” This is particularly interesting given that Post+Beam’s development team has extensive relational database experience. According to Chao, MongoDB’s documentation and community support made it easy to get up to speed. The initial set-up consists of a three-node replica set (for automatic fail-over), all running in one cluster on Amazon EC2. While the team continues to use Postgres for some transactional components of the Linea app, it needed MongoDB’s flexible data model to support its business model, which demands continuous iteration. Which, of course, is how innovation happens.
Chao noted that Post+Beam plans to expand its use of MongoDB, particularly for those applications that “require a relatively short delivery time combined with requirements that might not be fully matured at the time of the [client] request.” This sounds like most applications, most of the time, in most enterprises. Indeed, this is one of the primary reasons we see for MongoDB’s mass adoption. As our friends at MongoLab say, “It’s a data model thing.”

Tagged with: data model, Post+Beam, case study, Linea, innovation, flexibility, replica sets, ease of use
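For readers curious what a three-node replica set like Post+Beam's looks like in practice, here is a rough mongo-shell sketch. The replica set name and host names are placeholders (Post+Beam's actual configuration is not public); in production each member would run on its own EC2 instance.

```javascript
// Start three mongod processes, one per node, each with the same
// replica set name, e.g.:
//   mongod --replSet linea-rs --port 27017 --dbpath /data/db
// Then, from a mongo shell connected to one of them:
rs.initiate({
  _id: "linea-rs",  // must match the --replSet name
  members: [
    { _id: 0, host: "node1.example.com:27017" },
    { _id: 1, host: "node2.example.com:27017" },
    { _id: 2, host: "node3.example.com:27017" }
  ]
});

// With three voting members, the set can elect a new primary
// automatically if any single node fails.
rs.status();  // verify all members reach PRIMARY/SECONDARY state
```

Three members is the minimum for automatic fail-over without an arbiter: a majority (two of three) can still elect a primary after losing one node.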

February 19, 2013

Guest post: Nokta.com runs Turkey's Internet on MongoDB

This is a guest post by Emrah Ozcelebi, CEO of SPP42, a leading NoSQL consultancy in Turkey.

Nokta, one of the largest Internet companies in Turkey, knows what it means to operate at scale. The Internet leader reaches over 84% of all Turkish Internet users, and its video platform, Izlesene.com, hosts more than 2.7 million videos and serves over 2 billion page views. As a Facebook Timeline launch partner, Nokta’s Izlesene.com also enables significant video sharing on Facebook. Finally, Nokta operates Turkey’s leading photo sharing site, Foto Kritik, as well as a blogging site, Blogcu, that welcomes more than 13 million unique monthly users. At the heart of all this data is MongoDB.

But Nokta got off to a rough start with MongoDB, due primarily to poor configuration and an inappropriate use case. Working together, 10gen and SPP42 were able to turn things around. First we got in touch with Nokta’s game department. Its Facebook implementation of a local board game, OkeyHane, was built on PHP, Java and Flash technologies with an open-source RDBMS as the database back-end. We were able to replace this relational database with MongoDB and significantly improve performance. It didn’t take long for Nokta’s software developers to realize that the flexibility of BSON gives extreme agility to the development team. The MongoDB replica set behind OkeyHane soon proved highly stable in production, as well as far easier to maintain and administer than comparable RDBMS deployments.

After MongoDB proved itself stable in the midst of a difficult “war zone,” Nokta decided to extend its adoption by also using MongoDB in its flagship product, Izlesene.com. Nokta also elected to employ MongoDB in its homegrown advertisement platform, which feeds all its sites and delivers ads to 15,000 to 40,000 concurrent users.
In order to meet the real-time requirements of the advertisement system, we helped to stabilize the MongoDB installations. The middleware is built in Scala on the Akka concurrency framework, with Spray providing the REST API layer. We worked with great people from Nokta.com like Erdem Agaoglu (@agaoglu) and Hakan Kocakulak, who are also highly skilled in Hadoop and HBase. After the proven success of battle-hardened MongoDB installations in the ad-serving application, the Izlesene.com developers became more eager to use MongoDB for storing metadata about users and videos. Nokta is now planning to replace all of its open-source RDBMS implementations with MongoDB.

Of course, at that level of traffic, there is no single silver bullet to solve all problems. The skilled development team is aware of that and willing to try new technologies. SPP42 and Nokta are working together to deliver better services to Nokta’s users by combining different NoSQL solutions such as Hadoop and Neo4j. With help from 10gen, we are able to offer better, integrated solutions to meet Nokta’s demands. There is a great wind filling NoSQL’s sails in Turkey. Although adoption is still at a very early stage, we are finding great success (and plenty of MongoDB interest) as a 10gen partner in Turkey. Companies like Nokta are able to achieve serious scale and improved developer productivity with MongoDB, helped by working with an experienced local partner like SPP42.

About SPP42

SPP42 is a Turkey-based consulting and training company specializing in decision support systems and business intelligence. Since its founding, SPP42 has delivered top-level open-source consultancy and training services - mainly Java, Pentaho, Jasper and Python solutions over OpenStack, OpenShift, MongoDB and other NoSQL solutions. SPP42’s services include end-to-end integration solutions, from development and architecture to implementation.
SPP42 works with Turkey’s leading companies and helps them stay on the bleeding edge of technological innovation. We help them plan the migration from their existing technologies to newer ones so that our customers are always competitive globally.

Tagged with: guest post, scalability, Scala, RDBMS, Turkey, SPP42, partner, ease of use, developer productivity

February 13, 2013

Technology Adoption and the Power of Convenience

Just as the ink was drying on my ReadWrite piece on how the convenience of public cloud computing is steamrolling over concerns about security and control, RedMonk über-analyst Stephen O’Grady posted an exceptional review of why we should “not underestimate the power of convenience.” As he writes:

One of the biggest challenges for vendors built around traditional procurement patterns is their tendency to undervalue convenience. Developers, in general, respond to very different incentives than do their executive purchasing counterparts. Where organizational buyers tend to be less price sensitive and more focused on issues relating to reliability and manageability, as one example, individual developers tend to be more concerned with cost and availability - convenience, in other words. Because you are who you build for, then, enterprise IT products tend to be more secure and compliant and less convenient than developer-oriented alternatives. None of which would be a problem for old-guard IT vendors if developers, not to mention line of business executives, didn’t have increased control over what gets used in the enterprise. From open source to SaaS, legacy procurement processes are fracturing in the face of developers, in particular, building what they want when they want.

Because of the cloud. Because of open source. Because of convenience.

O’Grady points to a variety of technologies, including MongoDB, Linux, Chef/Puppet, Git, and dynamic programming languages, that have taken off because they’re so easy to use compared to legacy (and often proprietary) incumbents. Most are open source but, as I point out in my ReadWrite article, “open” isn’t always required. Microsoft SharePoint and Salesforce.com, for example, are both proprietary but also easier to adopt than the crufty ECM and on-premise CRM systems they displaced. The key, again, is convenience. It’s one of the things that drew me to 10gen.
MongoDB isn’t perfect, but its data model makes life so easy on developers that its adoption has been impressive. That flexibility and ease of use is why MTV and others have embraced MongoDB. With convenience comes adoption, and with adoption comes time to resolve the issues any product will have. Most recently, this has resulted in 10gen removing MongoDB’s global write-lock in MongoDB version 2.2, as well as changing the default write behavior with MongoClient. All while growing community and revenues at a torrid pace.

Back to O’Grady. As he concludes, “with developers increasingly taking an active hand in procurement, convenience is a dangerous feature to ignore.” I couldn’t agree more.

- Posted by Matt Asay, vice president of Corporate Strategy.

Tagged with: Stephen O'Grady, Redmonk, convenience, ease of use, flexibility, MTV, global write-lock, developers, Linux, ReadWrite
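The MongoClient change mentioned above flipped the driver default from unacknowledged to acknowledged writes. As a rough sketch (plain option objects, not tied to any specific driver API), the behaviors correspond to different write-concern settings:

```javascript
// Pre-MongoClient default: "fire and forget" -- the driver does not
// wait for the server, so write errors go unreported.
const legacyWriteConcern = { w: 0 };

// MongoClient default: acknowledged writes -- the server confirms each
// write (or returns an error) before the driver continues.
const mongoClientWriteConcern = { w: 1 };

// A replica set can demand stronger durability still: acknowledgement
// from a majority of members, plus a journal commit.
const majorityWriteConcern = { w: "majority", j: true };
```

The trade-off is latency for safety: `w: 0` is fastest but silent about failures, while higher `w` values surface errors at the cost of waiting on more members.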

December 20, 2012