Case Study


EA Scores With MongoDB-based FIFA Online 3

Think the World Series is big? Or the Super Bowl? Neither comes close to the billions of people who tune in to watch the World Cup, soccer's (football to everyone outside North America) quadrennial event. But what about the most played game in your household? That's likely Electronic Arts' EA Sports FIFA, the world's best-selling sports video game franchise. EA Sports FIFA offers otherwise average athletes the chance to take on and beat the world's best, weaving intricate passing plays and mastering Messi-esque dribbling with the flick of a controller. All without leaving the comfort of their couch.

Not everyone chooses to play FIFA on their Xbox or PlayStation, however. Throughout Asia, one of the most popular ways to bend it like Beckham is with EA's FIFA Online 3. The massively multiplayer online soccer game is the most popular sports game in Korea, letting players choose and customize a team from any of over 30 leagues and 15,000 real-world players. Players like Ronaldo. Like Özil. Like Ibrahimovic. Or Park Ji-sung. (More on him later.)

Because FIFA Online 3, developed by Spearhead, one of EA's leading development studios, needs to scale to millions of players, Spearhead built the game to run on MongoDB, the industry's most scalable database. EA already runs 250 MongoDB servers, spread across 80 shards. As FIFA Online 3 continues to grow in popularity, EA expects MongoDB's autosharding and other features to make it simple to scale.

Not content to win accolades on the field, FIFA Online 3 has also garnered honors from the industry, most recently winning a MongoDB Innovation Award for its creative use of MongoDB. Even better, Spearhead, recipient of the $2,500 award for its work, donated it to the Park Ji-sung JS Foundation.
Park, who played for years for Manchester United and tormented Arsenal defenses, "congratulate[d] Spearhead on the great performance of FIFA Online 3," performance enabled by its underlying MongoDB data infrastructure. In addition to EA, Kixeye and a variety of other gaming companies use MongoDB to improve the gaming experience.
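The autosharding credited above for EA's scale works by partitioning documents across shards on a shard key. A rough conceptual sketch of hash-based partitioning follows; this illustrates the routing idea only, not MongoDB's actual implementation, and the `player_id` shard key is a hypothetical choice:

```python
import hashlib

def shard_for(player_id: str, num_shards: int = 80) -> int:
    """Map a player document to one of the shards by hashing its shard key.

    Illustrative only: in a real deployment, mongos and the config servers
    handle routing. The 80-shard figure comes from the article.
    """
    digest = hashlib.md5(player_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Every document with the same shard key lands on the same shard, so a
# lookup by player_id touches only one of the 80 shards.
home_shard = shard_for("park-ji-sung")
```

Hashing spreads write load evenly across shards, which is what lets a cluster grow by simply adding machines as the player base grows.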

August 14, 2014

How And Why Verizon Wireless Chose MongoDB

Even small organizations struggle with change. But imagine that you have 103 million retail customers, roughly 1,700 retail locations to serve them, and $81 billion in revenue at stake. Change necessarily comes hard to a company of that scale and reach. But change is precisely what Verizon Wireless increasingly enables using MongoDB.

The Times They Are a-Changing

In an organization the size of Verizon Wireless, the business needs are constantly growing and changing, as Shivinder Singh, Senior Systems Architect at Verizon Wireless, told an audience at MongoDB World 2014. These forces push Verizon Wireless to explore new and innovative ways to process and manage its data as it seeks to drive greater value for its customers. One of those "new and innovative ways" is MongoDB, which helps Verizon Wireless get greater value from its data while simultaneously accelerating time-to-market and improving its asset utilization.

As the company looks to augment its existing technologies, however, there's always a fair amount of trepidation, not to mention the ever-looming question: why can't we just do this with the technologies we already own and/or know? But data is changing, and the world of relational databases at times doesn't fit the new world of unstructured or semi-structured data. Tasks that would have required a dedicated resource and weeks of environment setup with traditional technologies could be accomplished fairly quickly with MongoDB; in one case, Verizon Wireless was "able to do that in two hours." Even so, Verizon Wireless discovered that one of the biggest challenges in moving to MongoDB was to "unlearn" RDBMS concepts and change its mindset to embrace new MongoDB and NoSQL concepts. But we're getting ahead of ourselves here. How did Verizon Wireless start using MongoDB?

Getting Started With MongoDB

Verizon Wireless opted to start small with MongoDB, though it did try before it bought, one of the cardinal virtues of open source. (More on that below.)
The company decided to augment its employee portal, a business-critical application that is "basically the homepage of anyone who works for Verizon." The existing portal was good, but Verizon Wireless wanted to build in new functionality to capture social feeds from Twitter and Facebook and display them specific to each user. Not so easy for a relational database. Originally the development team put MongoDB through its paces, first running a proof of concept and then rolling it out. They didn't have anyone dedicated to supporting it, however, so the development team asked Singh's team to take it on.

To bring himself up to speed with MongoDB, Singh took the route that over 200,000 other people have taken: MongoDB's free online training. As he describes it, within two days he was at a level where he could comfortably manage MongoDB. Within just two weeks he had re-architected Verizon Wireless' entire development setup to run as a replicated cluster rather than standalone. He then proceeded to test and break the cluster, recover it, test the recovery, test failover capabilities and more. But Singh wasn't done yet.

Putting The MongoDB Team To The Test

Going with a new technology can be risky, but choosing a new technology vendor to support it is perhaps even riskier. To minimize that risk, Singh decided to put MongoDB, the company, to the test. So Singh did what any other conscientious would-be buyer would do: he faked his death. Well, not his death, per se, but the death of his server (along with the secondary data center, just to make things doubly interesting). Of course MongoDB would quickly respond to a marquee customer like Verizon Wireless, so he also faked his identity, using a generic email address. In other words, MongoDB's support team got a call from some no-name person with a generic email address claiming "my-server-is-down-the-world-is-on-fire-someone-help-me-NOW!"
Within "a short period of time" MongoDB had assembled its engineers to resolve the issue and get Verizon Wireless back on track. Only then did the MongoDB team learn Singh's real identity, and win the deal.

The Future Of MongoDB At Verizon Wireless

Looking forward, Verizon Wireless has already started a new proof of concept for an online log management system. Not surprisingly, Verizon has "some huge servers, some huge clusters, and all of them generate a huge amount of log data." Given Verizon Wireless' data volumes, it is also looking for ways to pair MongoDB with Hadoop to leverage the strengths of both together, and has been evaluating the MongoDB Connector for Hadoop. As Verizon Wireless moves forward, Singh notes that MongoDB is appropriate for "quite a lot" of its new use cases, and is therefore being evaluated for these new use cases alongside its traditional RDBMSes. That's a big change for a Fortune 50 enterprise, but Singh believes it's necessary to help the company grow and evolve to meet customer needs. To view all of Singh's slides, see "How Verizon Uses Disruptive Developments for Organized Progress." To watch the video, please click here.

July 31, 2014

A Mobile-First, Cloud-First Stack at Pearson

Pearson, the global online education leader, has a simple yet grand mission: to educate the world; to have 1 billion students around the globe touching its content on a regular basis. The company is growing quickly, especially in emerging markets, where the primary way to consume content is via mobile phones. But to reach global users, Pearson needs to deploy in a multitude of private and public data centers around the globe. This demands a mobile-first, cloud-first platform, with the underlying goal of improving education efficacy. In 2018, Pearson will be announcing to the public markets what percentage of revenue is associated with the company's efficacy. There's no question; that's a bold move. As a result, apps have to be built in a way that measures how users are interacting with them.

Front and center in Pearson's strategy is MongoDB. With MongoDB, as Pearson CTO Aref Matin told the audience at MongoDB World (full video presentation here), Pearson was able to replace double-digit silos of independent platforms with a consolidated platform that allows for measuring efficacy. "A platform should be open, usable by all who want to access functionality and services. But it's not a platform until you've opened up APIs to the external world to introduce new apps on top of it," declared Matin. A key part of Pearson's redesigned technology stack, MongoDB proved to be a good fit for a multitude of reasons, including its agility and scalability, document model, and ability to perform fast reads and ad hoc queries. Also important to Matin was the ability to capture the growing treasure trove of unstructured data, such as the peer-to-peer and social interactions that are increasingly part of education.
So far, Pearson has leveraged MongoDB for use cases such as:

- Identity and access management for 120 million user accounts, with nearly 50 million per day at peak;
- Adaptive learning and analytics to detect, in near real-time, what content is most effective and identify areas for improvement; and
- The Pearson Activity Framework (akin to a "Google DoubleClick," according to Matin), which collects data on how users interact with apps and feeds the analytics engine.

All of this feeds into Matin's personal vision of increasing the pace of learning. "Increasing the pace of learning will be a disruptive force," said Matin. "If you can reduce the length of time spent on educating yourself, you can learn a lot more and not spend as much on it. That will help us be able to really educate the world at a more rapid pace."

July 31, 2014

Enabling Extreme Agility At The Gap With MongoDB

The Gap's creative director insists that "fashion is... about instinct and gut reaction." In the competitive world of retail, that "instinct" has been set to fast-forward, as fast-fashion retailers and other trends constantly push Gap and other retailers to meet consumer needs, faster. As boring as it may seem, Gap's purchase order management system really, really matters in ensuring the company can quickly evolve to meet consumer tastes. Unable to meet its business agility requirements using traditional relational databases, Gap uses MongoDB for a wide range of supply chain systems, including master data management, inventory and logistics functions such as purchase order management.

Collecting Money From Happy Customers

This is no small feat given Gap's size. The Gap is a global specialty retailer offering clothing, accessories and personal care products for men, women, children and babies. With nearly 134,000 employees, almost 3,200 company-operated stores and an additional 400 franchise stores, fashion-conscious consumers can find The Gap around the world. And they do, spending over $16 billion annually on Gap's latest track pants, indigo-washed jeans and racerback tanks. That's both the good news and the bad news, as presented by Gap consultant Ryan Murray at MongoDB World. Good, because it means Gap, more than anyone else, dresses America and, increasingly, the world. Bad, because at its scale change can be hard.

Square Pegs, Round Holes And Purchase Orders

Even something as simple as a purchase order can have a huge impact on a company like Gap. A purchase order is a rich business object that contains various pieces of information (item type, color, price, vendor information, shipping information, etc.). A purchase order at Gap can be an order to a vendor to produce a certain article of clothing.
The critical thing is that the business thinks about the order as a single entity, while Gap's RDBMS broke the purchase order up into a variety of rows, columns and tables, joined together. Not very intuitive. While this may seem like a small thing, as Murray points out, the RDBMS "forced [developers] to shift away from the business concept -- what is a purchase order and what are the business rules and capabilities around it -- and shift gears into 'How do I make this technology work for me and help me solve a business problem?' [mode of thinking]. And that destroys flow." Developers may be more technical than the rest of us, but Gap wanted its developers helping to build its business, not merely its technology. Murray continues: "We don't want the developer having to work with the impedance mismatch between the business concept that they're trying to solve for and the technology they're using to solve it."

Enabling Supply Chain Agility By Improving Developer Productivity

As such, Gap realized it needed to evolve how it manages inventory and its vendors. It turned to MongoDB because MongoDB could easily make sense of data that comes in different shapes, storing it quickly and transparently. MongoDB, in short, helped Gap become much more agile and, hence, far more competitive. One way Gap managed this was by moving from a monolithic application architecture to a microservices-based approach. Applications have traditionally been built as large monoliths. In Gap's case, that meant the PO system was one big code base that handled everything related to a PO, whether that was handling demand from the planning systems and creating purchase orders, or handling how those purchase orders integrate with other systems and get down to the vendors. All of those things are fairly independent of each other, but the code base that managed them was monstrously big and monolithic.
Instead, Murray and team introduced the concept of the microservice: a service dedicated to one business capability. For example, a microservice could handle communicating out to the vendors, by EDI or whatever technology, that a new purchase order has been registered. It turns out that MongoDB is perfect for such microservices because it's so simple and lightweight, Murray notes. Gap uses MongoDB to power these single-purpose services and to connect them together. Each of these services lines up with a business function. Developers can work on separate microservices without bumping into or waiting on each other, as is common in a monolithic architecture. This enables them to be far more productive and to work much faster.

MongoDB As An "Extreme Enabler Of Agile Development"

In this and other ways, Murray lauds MongoDB as "an extreme enabler of agile development," or iterative development. Waxing rhapsodic, Murray continues: "MongoDB allow[s our developers] to essentially forget about the storage layer that's underneath and just get work done. As the business evolves, the concept of a purchase order as an aggregate concept will also change as they add fields to it. MongoDB gets out of the way. [Developers] drop a collection, start up new code over that database, and MongoDB accepts whatever they throw at it. Again, developers don't have to stop, break the context of solving the business problem, and get back to what they're doing. They simply get to focus on the business problem. And so as an agile enabler, as an enabler of developers to work fast and smart, MongoDB is extremely useful." As just one example, Gap was able to develop its new MongoDB-based purchase order system in just 75 days, a record for the company. In true agile fashion, MongoDB enables Gap to continue to iterate on the system. Five months in, the business wanted a dashboard-style view of the life of a purchase order.
With MongoDB, that business requirement turned out to require almost no development effort. Murray and team were able to add new types of purchase orders, have them easily coexist with old purchase orders in the same collection, and keep moving. Not in months. Or weeks. Rather, each day the development team was able to show the business what the feature might look like, thanks to MongoDB's flexibility. All of which makes Murray and his team at Gap happy to work with MongoDB. "Software is ultimately about people," he insists, and giving developers software like MongoDB that they love to use makes them happy and productive. And agile.
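To make the contrast Murray draws concrete, here is what a purchase order might look like modeled as a single document, the way the business thinks of it, rather than as rows scattered across joined tables. The field names and values are invented for illustration, not Gap's actual schema:

```python
# A purchase order as one nested document: the whole business object can be
# read, reasoned about, and updated as a unit. (Hypothetical fields/values.)
purchase_order = {
    "po_number": "PO-10045",
    "vendor": {"name": "Acme Apparel", "country": "VN"},
    "items": [
        {"sku": "JEAN-IND-32", "description": "indigo-washed jeans",
         "color": "indigo", "quantity": 5000, "unit_price": 12.40},
        {"sku": "TANK-RB-M", "description": "racerback tank",
         "color": "white", "quantity": 8000, "unit_price": 3.10},
    ],
    "shipping": {"method": "sea", "destination": "Long Beach, CA"},
    "status": "submitted",
}

# No joins needed: line items live inside the order they belong to.
order_total = sum(i["quantity"] * i["unit_price"]
                  for i in purchase_order["items"])
```

Because the aggregate lives in one place, adding a field later (as Gap did when new purchase-order types arrived) means writing documents with the extra field into the same collection; no schema migration is required.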

July 30, 2014

Visualizing Mobile Broadband with MongoDB

The FCC has a mandate to collect and share information on mobile broadband quality. Traditionally, that has meant collecting data and then issuing a report. Before the report is completed – a process that involves drafting, writing, rewriting, and getting the right approvals – the public generally has no visibility into the data. MongoDB is helping change that.

The FCC Speed Test app (available for iPhone and Android) measures network quality metrics, including upload and download speed, latency, and packet loss. Currently, users can test their own networks and view an archive of their test results. Soon, the Visualizing Mobile Broadband project will allow consumers to see aggregate test results overlaid on maps, as soon as they become available. The visualization application is built on MongoDB using Node.js, and employs Mapbox for mapping. The results of the Speed Test app are collected, imported into MongoDB, cleaned and validated, and then aggregated by geography, carrier, and time.

Eric Spry, the Acting Geographic Information Officer at the FCC, said: "The data tended to be a little messy for our SQL schema, and we were constantly having to redefine fields and account for new edge cases in the data... MongoDB allowed us a great deal of flexibility in the data we could accept." Using MongoDB, the FCC is able to store the results as they come in and perform data validation afterwards. Not only does MongoDB serve as the container for the speed test data, but it also provides the spatial operators for aggregating and analyzing test results based on location. The FCC also chose MongoDB for its ability to scale as test results grow from millions to tens of millions per month. The application will allow consumers to see mobile broadband data and use that information to make more informed carrier choices.
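The aggregation step described above, grouping test results by geography and carrier, can be sketched in miniature. The field names and records below are assumptions for illustration, not the FCC's actual schema; in MongoDB itself this grouping would be a `$group` stage in an aggregation pipeline:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical speed-test records as they might look after import and
# cleaning (invented values, not real FCC data).
results = [
    {"carrier": "A", "county": "06037", "download_mbps": 18.2},
    {"carrier": "A", "county": "06037", "download_mbps": 22.6},
    {"carrier": "B", "county": "06037", "download_mbps": 9.4},
]

# Group by (carrier, geography), then compute an average per bucket,
# analogous to MongoDB's $group with an $avg accumulator.
buckets = defaultdict(list)
for r in results:
    buckets[(r["carrier"], r["county"])].append(r["download_mbps"])

summary = {key: round(mean(speeds), 1) for key, speeds in buckets.items()}
```

The same shape of pipeline extends naturally to time windows and to MongoDB's geospatial operators when the grouping key is a map cell rather than a county code.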
In addition, an API and the release of the source code will enable others to build their own applications using the mobile network information as it becomes available. Of course, being a government agency, the FCC faces its own set of challenges. "The MongoDB team understands government procurement and our unique security issues," said Spry. "Their knowledge of our requirements meant that standing up a MongoDB server went very smoothly." For more details, see the full recording of Eric Spry's talk at MongoDB World, available now. The FCC will launch the application in August 2014. To see all MongoDB World presentations, visit the MongoDB World Presentations page.

July 21, 2014

MongoDB Takes Center Stage at Ticketmaster

The world leader in selling tickets, Ticketmaster spent more than a decade developing apps extensively on Oracle and MySQL. The ticketing giant recently added MongoDB to the mix to complement its existing database technologies with increased flexibility and performance, and decreased costs and time-to-market. "Database performance and scale are a huge part of what we do, ensuring we can sell tickets 24/7," said Ed Presz, VP of Database Services at Live Nation/Ticketmaster.

MongoDB currently plays a key role in TM+, Ticketmaster's newest app covering the secondary, resale market. It will also be used in the future for a new app called Concerts, including venue view, B2B session recovery and client reports. "We're moving to an agile devops environment and our developers love MongoDB's ease of deployment and flexibility," said Presz. Presz also highly recommends MongoDB's MMS and has been pleased with MongoDB's Enterprise Support. "We were new to MongoDB, about to go into production and we were a bit scared," he said. "One of the things I was pushing hard for was enterprise support, so we'd have someone we could call. MongoDB's enterprise support has been fantastic." Ticketmaster is a good example of how an organization can benefit both developmentally and operationally from MongoDB. To see all MongoDB World presentations, visit the MongoDB World Presentations page.

June 30, 2014

MongoDB: A Single Platform for All Financial Data at AHL

AHL, a part of Man Group plc, is a quantitative investment manager based in London and Hong Kong, with over $11.3 billion in assets under management. The company relies on technology like MongoDB to be more agile and therefore gain an edge in the systematic trading space. With MongoDB, AHL can better support its quantitative researchers – or "quants" – as they research, construct and deploy new trading models to understand how markets behave.

Importantly, AHL didn't embrace MongoDB piecemeal. Once AHL determined that MongoDB could significantly improve its operations, the financial services firm embraced MongoDB across the firm for an array of applications. AHL replaced a range of traditional technologies, including relational databases, with a single platform built on MongoDB for every type and frequency of financial market data, and for every level of data SLA, including:

- Low-frequency data – MongoDB was 100x faster in retrieving data and also delivered consistent retrieval times. Not only is this more efficient for cluster computation, but it also leads to a more fluid experience for quants, with data ready for them to easily interact with, run analytics on and plot. MongoDB also delivered cost savings by replacing a proprietary parallel file system with commodity SSDs.
- Multi-user, versioned, interactive graph-based computation – This includes 1 terabyte of data representing 10,000 stocks and 20 years of time-series data, used to help quants come up with trading signals for stock equities. While not a huge quantity of data, MongoDB reduced the time to recompute trading models from hours to minutes, accelerated quants' ability to do interactive research, and enabled read/write performance of 600MB of data in less than 1 second.
- Tick data – Used to capture all market activity, such as price changes for a security, at up to 150,000 ticks per second and including 30 terabytes of historic data.
MongoDB quickly scaled to 250 million ticks per second, a 25x improvement in tick throughput (with just two commodity machines!) that enabled quants to fit models 25x as fast. AHL also cut disk storage down to a mere 40% of its previous solution, and realized a 40x cost savings. The result? According to Gary Collier, AHL's Technology Manager: "Happy developers. Happy accountants."
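A common way to get this kind of tick throughput out of a document database is to bucket many ticks into one document per security per time window, rather than storing one row per tick. A minimal sketch of that bucketing idea, with invented symbols and values (the article does not describe AHL's actual schema):

```python
from collections import defaultdict

def bucket_ticks(ticks):
    """Group raw ticks into (symbol, whole-second) bucket documents.

    Each bucket holds an array of ticks, so reading a one-second window
    touches a single document instead of many individual rows.
    """
    buckets = defaultdict(lambda: {"ticks": []})
    for t in ticks:
        key = (t["symbol"], int(t["ts"]))  # bucket by symbol and second
        buckets[key]["ticks"].append({"ts": t["ts"], "price": t["price"]})
    return buckets

ticks = [
    {"symbol": "XYZ", "ts": 100.001, "price": 9.99},
    {"symbol": "XYZ", "ts": 100.750, "price": 10.01},
    {"symbol": "XYZ", "ts": 101.020, "price": 10.00},
]
docs = bucket_ticks(ticks)  # three ticks become two bucket documents
```

Bucketing trades a little write-path complexity for far fewer documents, less index overhead, and sequential reads over a window, which is where throughput gains like those described above tend to come from.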

June 27, 2014

US Department Of Veterans Affairs Goes From Wire Frame To Production App In Weeks, Not Months Or Years, With MongoDB

Would it surprise you that one of the biggest open-source software shops in the world, in fact one of the biggest NoSQL shops in the world, resides in the U.S. government? The Department of Veterans Affairs has more than 20 million primary customers, a $3.4B annual IT budget, 400,000 users and over 5,000 applications. The VA turned to MongoDB to unlock enterprise services with a schema-agnostic enterprise CRUD (eCRUD) service.

Previously, the VA was paying millions of dollars to lock data away in relational databases and millions more to get it back out. "It just didn't make sense," said Joe Paiva, Chief Technology Strategist at the U.S. Department of Veterans Affairs. "We realized early on we could never build all the apps that people want. We wanted to go from wire frame to app much, much faster." To get there, the VA used MongoDB as one logical, federated data store for all of its different types of data. Now, people can freely code as long as they know how to make an AJAX web services call. "You can say you're agile, that you're incremental, but when you need change, you need all the change!" said Paiva. With MongoDB, they achieved just that: the VA had the first version of the service up and running in weeks. "It was that fast," said Paiva.

Through this effort, the VA has been able to provide efficiency and enhanced information agility. Plus, it has increased security by consolidating data under standardized enterprise controls... all in the name of keeping costs low while better serving a greater number of veterans. To see all MongoDB World presentations, visit the MongoDB World Presentations page.
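The appeal of a schema-agnostic CRUD service is that any application can persist any document shape through one generic interface. A toy in-memory sketch of the idea follows; this illustrates the pattern only and is not the VA's actual eCRUD service, whose design the article does not detail:

```python
import uuid

class ECrudStore:
    """Toy schema-agnostic CRUD store: any JSON-like document is accepted
    as-is, the way a MongoDB collection would accept it. Illustration of
    the concept, not the VA's implementation."""

    def __init__(self):
        self._docs = {}

    def create(self, doc: dict) -> str:
        doc_id = str(uuid.uuid4())
        self._docs[doc_id] = dict(doc, _id=doc_id)
        return doc_id

    def read(self, doc_id: str):
        return self._docs.get(doc_id)

    def update(self, doc_id: str, fields: dict) -> None:
        self._docs[doc_id].update(fields)

    def delete(self, doc_id: str) -> None:
        self._docs.pop(doc_id, None)

# Any app can store its own document shape; no table design required.
store = ECrudStore()
claim_id = store.create({"type": "claim", "veteran": "J. Doe", "status": "open"})
store.update(claim_id, {"status": "approved"})
```

Behind such an interface, centralizing storage is also what enables the standardized enterprise security controls the article mentions: every document passes through one enforcement point.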

June 27, 2014

Best Of Both Worlds: Genentech Accelerates Drug Research With MongoDB & Oracle

"Every day we can reduce the time it takes to introduce a new drug can have a big difference on our patients," said Doug Garrett, Software Engineer at Genentech. Genentech Research and Early Development (gRED) develops drugs for significant unmet medical needs. A critical component of this effort is the ability to provide investigators with new genetic strains of animals so as to understand the causes of diseases and to test new drugs. As genetic testing has both increased and become more complex, Genentech has focused on redeveloping its Genetic Analysis Lab system to reduce the time needed to introduce new lab instruments. MongoDB is at the heart of this initiative, which captures the variety of data generated by genetic tests and integrates it with Genentech's existing Oracle RDBMS environment. MongoDB's flexible schema and its ability to integrate easily with the existing Oracle RDBMS have helped Genentech reduce development from months to weeks or even days, significantly accelerating drug research.

Previously, the Genentech team needed to change the schema every time it introduced a new lab instrument, which held up research by three to six months, and sometimes even longer. At the same time, the database was becoming more difficult to support and maintain. The MongoDB redesign delivered immediate results. In just one example, adding a new genetic test instrument (a new loader) had zero impact on the database schema and allowed Genentech to continue with research after just three weeks, instead of the standard three- to six-month delay. MongoDB also makes it possible for Genentech to load more data than in the past, which fits well with the "collect now, analyze later" model that Garrett noted MongoDB co-founder Dwight Merriman has often suggested.
Said Garrett: "Even if we don't know if we need the data, the cost is practically zero and we can do it without any programming changes, so why not collect as much as we can?" To see all MongoDB World presentations, visit the MongoDB World Presentations page.
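The flexible-schema point is worth making concrete: results from a new instrument can carry fields the old ones never had, and both can live in the same collection with no migration. The field names below are invented for illustration, not Genentech's schema:

```python
# Two result documents from different lab instruments, stored side by side.
legacy_result = {
    "instrument": "loader-v1",
    "sample_id": "S-001",
    "genotype": "wt/wt",
}
new_result = {
    "instrument": "loader-v2",   # a newer instrument...
    "sample_id": "S-002",
    "genotype": "wt/mut",
    "well_plate": "P7",          # ...adds fields the old schema never had,
    "raw_signal": [0.12, 0.87],  # with no migration required
}

collection = [legacy_result, new_result]

# Queries simply ignore fields a document doesn't have.
with_plates = [doc for doc in collection if "well_plate" in doc]
```

This is exactly why adding a new loader could have "zero impact on the database schema": in a document model, the schema lives in the documents themselves, and readers that don't know about the new fields are unaffected.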

June 26, 2014

How Hudl Uses MongoDB To Scale Its Video Analysis Platform

Hudl's video analysis platform helps coaches win by delivering secure access to video analysis tools from any computer or mobile device. For those who follow Division 1 college sports in the United States: the Hudl platform stores video for 99% of DI schools' top recruits. As Hudl has grown, it has outgrown some of its infrastructure. For example, when Hudl hit a limit on EC2 (where SQL wouldn't scale on a single instance), the growing company needed to recruit a new database. After evaluating different options, Hudl chose MongoDB. According to Hudl CTO Brian Kaiser: "MongoDB changed devops from a necessary evil to something that is transforming the company, helping us move quickly. It makes innovation easy and is universally recognized at our company because it's been so impactful on our growth."

The numbers speak for themselves. Today, MongoDB stores 650 million plays (the atomic unit of video, such as a point in volleyball or a play in football) and associated metadata, which draw 1 billion video views per month. The MongoDB-based platform has streamed 18 petabytes of data in 2014. During peak football season, the platform ingests 25 hours of raw video per minute – one quarter of what YouTube ingests. Big numbers for a small company! With MongoDB, Hudl has achieved steady, consistent growth, and MongoDB has enabled Hudl to double in size. "MongoDB really facilitates rapid iterations, so the dev team can try things out and make mistakes – it's magical for that," said Kaiser. "MongoDB has led ops to promote squad growth and really empowered our company, and that's something we're proud of." To see all MongoDB World presentations, visit the MongoDB World Presentations page.

June 24, 2014

MongoDB powers Mappy Health's tweet-based disease tracking

Twitter has come a long way from being the place to read what your friends ate for dinner last night (though it still has that). Now it's also a place where researchers can track the ebb and flow of diseases, and take appropriate action. In early 2012, the U.S. Department of Health and Human Services challenged developers to design applications that use the free Twitter API to track health trends in real time. With $21,000 in prize money at stake, Charles Boicey, Chief Innovation Officer of Social Health Insights, and team got started on the Trending Now Challenge, and ultimately won with their MongoDB-powered solution, Mappy Health. Not bad, especially since the small team had only three weeks to put together a solution.

Choosing a Database

MongoDB was critical to getting the application done well, and on time. As Boicey tells it, MongoDB is just a wonderful environment in which to work: what used to take weeks with relational database technology is a matter of days or hours with MongoDB. Fortunately, Boicey had a running start. Having used MongoDB previously in a healthcare environment, and having seen how well it ingested health information exchange data in XML format, Boicey felt sure MongoDB could manage incoming Twitter data. Plus, Mappy Health needed MongoDB's geospatial capabilities to be able to track diseases by location. Finally, while the team evaluated other NoSQL options, "MongoDB was the easiest to stand up" and is "extremely fast." To make the development process even more efficient, Mappy Health runs the service on Amazon EC2.
Processing the Data

While UCI has a Hadoop ecosystem Mappy Health could have used, the team found that its real-time algorithms and MapReduce jobs run much faster on MongoDB, and so it runs MapReduce within MongoDB. As Boicey notes: "Writing MapReduce jobs in Javascript has been fairly simple and allows us to cache collections/hashes of data frequently displayed on the site easily using a Memcached middleman between the MongoDB server and the Heroku-served front-end web app." This jibes well with Mappy Health's overall rationale for choosing MongoDB: MongoDB doesn't require a lot of work upfront (e.g., schema design; "doing the same thing in a relational database would require a lot of advance planning and then ongoing maintenance work like updating tables"), and MongoDB works really well and scales beautifully.

Since winning the Trending Now Challenge, Mappy Health has been working with a number of other organizations. We look forward to even bigger and better things from this team. Imagine what they could do if given a whole four weeks to build an application!

Tagged with: Mappy Health, case study, disease tracking, US Department of Health and Human Services, flexibility, ease of use, Amazon, EC2, dynamic schema
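The kind of MapReduce job Boicey describes, counting disease mentions by location, can be sketched in miniature. The tweets, terms, and field names below are invented examples; Mappy Health's actual jobs were written in JavaScript and run inside MongoDB:

```python
from collections import Counter

# Hypothetical tweet documents with an attached location.
tweets = [
    {"text": "home sick with the flu again", "state": "CA"},
    {"text": "flu season is brutal", "state": "CA"},
    {"text": "allergies acting up", "state": "NY"},
]

def map_phase(tweet, terms=("flu", "allergies")):
    """Map: emit a ((location, term), 1) pair for each tracked term found."""
    for term in terms:
        if term in tweet["text"]:
            yield (tweet["state"], term), 1

def reduce_phase(pairs):
    """Reduce: sum the emitted counts per (location, term) key."""
    counts = Counter()
    for key, n in pairs:
        counts[key] += n
    return counts

counts = reduce_phase(kv for t in tweets for kv in map_phase(t))
```

The reduced counts are exactly the shape of data the site then caches and plots on a map: one number per (region, disease) pair.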

March 18, 2013

Pearson / OpenClass Uses MongoDB for Social Learning Platform

We recently spoke with Brian Carpio of Pearson about OpenClass, a new project from Pearson with deep Google integration.

What is OpenClass?

OpenClass is a dynamic, scalable, fully cloud-based learning environment that goes beyond the LMS. OpenClass stimulates social learning and the exchange of content, coursework, and ideas, all from one integrated platform. OpenClass has all the LMS functionality needed to manage courses, but that's just the beginning.

Why did you decide to adopt MongoDB for OpenClass?

OpenClass leverages MongoDB as one of its primary databases because it offers serious scalability and improved productivity for our developers. With MongoDB, our developers can start working on applications immediately, rather than slogging through the upfront planning and DBA time that relational database systems require. Also, given that a big part of the OpenClass story will be how we integrate with both public and private cloud technologies, MongoDB's support for scale-out, commodity hardware is a better fit than traditional scale-up relational database systems that generally must run on big-iron hardware.

Can you tell us about how you've deployed MongoDB?

Currently we deploy MongoDB in our world-class datacenters and in Amazon's EC2 cloud environment, with future plans to move to private cloud technologies such as OpenStack. We leverage both Puppet and Fabric for deployment automation and rolling upgrades. We also leverage Zabbix and the mikoomi plugin for monitoring our MongoDB production servers. Currently each OpenClass feature/application leverages its own MongoDB replica set, and we expect to need MongoDB's sharding features given the expected growth trajectory for OpenClass.

What recommendations would you give to other operations teams deploying MongoDB for the first time?

Automate everything!
Also, work closely with your development teams as they begin to design an application that leverages MongoDB; that's good advice for any new application that will be rolled into production. I would also say to look at Zabbix, as it has some amazing features for monitoring MongoDB in a single replica set or in a sharded configuration that can help you easily identify bottlenecks and know when it's time to scale out your MongoDB deployment. Finally, I would suggest subscribing to the #mongodb IRC channel, as well as the MongoDB Google Group, and don't be afraid to ask questions. I personally ask a lot of questions in the MongoDB Google Group and receive great answers not only from 10gen CTO Eliot Horowitz, although he does seem to answer a lot of my questions, but from many other 10gen folks.

What is in store for the future with MongoDB at Pearson?

Our MongoDB footprint is only going to continue to grow. More and more development teams are playing with MongoDB as the foundation of their new application or OpenClass feature. We are working on migrating functionality out of both Oracle and Microsoft SQL Server to MongoDB where it makes sense, to relieve the current stress on those incumbent database technologies. Thanks to Brian for telling us about OpenClass! Brian also blogs; be sure to check out his posts on MongoDB. Tagged with: case study, Pearson, OpenClass, scalability, flexibility, ease of use

February 28, 2013