After nearly 100 years as the largest U.S.-based business media brand, Forbes has established itself as a technology leader in the news industry. To compete in a new mobile environment, Forbes designed a next-generation mobile application to better engage users with its stories. The company turned to MongoDB to create a new infrastructure for engaging, dynamic content.
Steven Bond, the group director for the Forbes.com Software Development Team, chose MongoDB for its intuitive web interface, ease of use, and low cost. Of his experience with MongoDB, Bond says, “It just works.”
MongoDB made it possible for Forbes.com to store all of its data in a single database. This database holds nearly one million articles from thousands of global contributors, along with more than 120,000 user, company, and place list entries. With MongoDB, Forbes is able to aggregate its data, connect it to its mobile and web applications, and integrate partner feeds from a centralized location, creating a rich user experience.
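To make the single-database idea concrete, here is a minimal pymongo sketch of what such a content store might look like. All names here (the connection string, database, collection, and fields) are hypothetical illustrations, not Forbes’s actual schema.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical deployment
db = client["newsroom"]  # hypothetical database name

# An article, its contributor, and partner-feed metadata live together in
# one document, so web and mobile apps can fetch a story in a single query.
db.articles.insert_one({
    "title": "Example headline",
    "contributor": {"name": "Jane Doe", "beat": "technology"},
    "tags": ["mobile", "media"],
    "partner_feed": {"source": "example-partner", "ingested_at": "2013-11-01"},
    "published": True,
})

# One round trip returns everything an app screen needs.
story = db.articles.find_one({"tags": "mobile", "published": True})
print(story["title"])
```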
“The beauty of MongoDB is that we can constantly evolve without reengineering our entire approach,” says Bond. In his next project, Bond aims to use social media statistics to predict where users will consume content in the future and the kinds of content that will drive traffic. With MongoDB, Bond will help Forbes change how news is consumed and understood.
Using Big Data for Humanitarian Crisis Mapping
In the wake of natural disasters like Typhoon Haiyan, which brought widespread destruction to the Philippines several weeks ago, data management tools have become a critical component of the post-disaster landscape. Aid groups are monitoring tweets and instant messages where the infrastructure exists to support them, while tracking local news reports on the ground to find the areas suffering the greatest damage and directing resources to those most in need. Sourcing data can significantly improve the efforts of aid initiatives after a disaster.

Big data for development, or data philanthropy, streamlines crisis management and prevention by using data processing tools to anticipate and respond to humanitarian emergencies. Initiatives like the UN Global Pulse team are using data to find the “digital smoke signals of distress” that can appear months before showing up in official reports. Real-time data monitoring using social networks, cell phones, blogs, and online commerce platforms can alert the team to indicators of social distress or natural disaster. And with the capacity to recognize these trends comes the ability to prepare the right aid or prevention plan, one that could save lives.

What Big Data Can Do

Big data can create a clear picture of a disaster’s regional effects. A program called Ushahidi sourced eyewitness reports (in person and through social media) of the 2010 earthquake in Haiti. The reports’ data became a live crisis map, showing where victims lay buried under collapsed buildings and where aid was most needed. After Typhoon Bopha in the Philippines last year, the Digital Humanitarian Initiative used over 20,000 social media messages to create a map of the storm’s impact and determine where to send aid first.

Some organizations believe data for development can soothe social discontent. CNN reported that the U.S. State Department has analyzed data to try to prevent conflict from starting or escalating. Its Conflict and Stabilization Operations office analyzes behavioral patterns and semantic trends in social media to anticipate threats to peace while designing strategies to thwart potential outbreaks of violence.

Partnerships for Philanthropy

As the data philanthropy movement grows, the tech industry will be watching which companies are the first to join this global project. Twitter, Facebook, or Instagram might help us move toward a future where disease or disaster can be instantly monitored and possibly prevented, or where the spread of poverty can be stopped in its tracks. The success of these new ventures will depend on more than the determination of the people who work on them: small humanitarian initiatives will need to develop partnerships with the larger corporations that control telecommunications and census data. Without access to big data or the proper processing tools, data philanthropy groups will not be able to keep up with the demands of crises happening in real time.

Going Forward

The United Nations Office for the Coordination of Humanitarian Affairs released a report this past June on the importance of big data to humanitarianism. Finding ways to improve humanitarian aid services with data is one of the great challenges and opportunities of our age. But accessing data is not necessarily straightforward: negotiating with data providers can be difficult, and privacy concerns could make corporations unwilling to participate.
And while big data processing can be used to improve lives, it should augment existing data gathering methods, not replace them.

MongoDB has helped several organizations use data mining to augment public service. The City of Chicago used MongoDB to design WindyGrid, a geographic information system providing a unified view of the city’s operations across a map. By including real-time data like 911 and 311 service calls, critical information is geospatially enabled and tracked (sketched below) to help Chicago’s Office of Emergency Management and Communications handle events or crises across the city.

To explore the frontiers of physics, CERN built its Data Aggregation System (DAS) on MongoDB to help physicists search for and aggregate information across complex data landscapes. The data and metadata CERN handles are constantly evolving, but the DAS allows researchers to find information with text-based queries, aggregating the results from distributed providers while preserving integrity and security.

While these organizations haven’t used data mining directly for humanitarian aid, the same techniques can readily be adapted to philanthropic service. Data philanthropy has the potential to influence humanitarian efforts and change how we understand the scope of big data. As these aid organizations grow in influence, it will be interesting to see how the industry shifts to make room for this new use of data.
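As an illustration of the geospatial pattern behind a system like WindyGrid, here is a minimal pymongo sketch. The connection string, database, collection, and field names are hypothetical; the 2dsphere index and the $near operator are standard MongoDB features.

```python
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")  # hypothetical deployment
calls = client["city_ops"]["service_calls"]        # hypothetical names

# A 2dsphere index lets MongoDB answer "what happened near here?"
# queries against GeoJSON points.
calls.create_index([("location", GEOSPHERE)])

# A 311 service call recorded with its coordinates (longitude first).
calls.insert_one({
    "type": "311",
    "category": "street light out",
    "location": {"type": "Point", "coordinates": [-87.6298, 41.8781]},
})

# Find every call within 2 km of an incident, nearest first.
nearby = calls.find({
    "location": {
        "$near": {
            "$geometry": {"type": "Point", "coordinates": [-87.63, 41.88]},
            "$maxDistance": 2000,  # meters
        }
    }
})
for call in nearby:
    print(call["type"], call["category"])
```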
4 Common Misperceptions about MongoDB
One year ago, in the middle of the pandemic, Dev Ittycheria, the CEO of MongoDB, brought me on as Chief Technology Officer. Frankly, I thought I knew everything about databases and MongoDB. After all, I’d been in the database business for 32 years already. I’d been on MongoDB’s Board of Directors and used the products extensively. And of course I’d done my due diligence, met the leadership team, and analyzed earnings reports and product roadmaps. Even with all that knowledge, this past year as MongoDB’s CTO has taught me that many of my preconceived notions were just plain wrong. This made me wonder how many other people might also have the wrong impression about this company. And this blog is my attempt to set those perceptions straight by sharing my four major revelations of the last year.

My first revelation is that MongoDB is not trying to become this generation’s relational database. For years I assumed that MongoDB basically wanted to be a better, more modern version of Oracle when it grew up. In other words, compete with the huge footprint of Oracle and other commercial RDBMSs that have been the industry archetype for so long. I was way off. The whole point of MongoDB is to leave all those forms of archaic, legacy database technology in the historical dust. This was never supposed to be an evolution, but a revolution. Our founders envisioned not only the world’s fastest and most scalable persistent store, but also one that would be programmed and operated differently. The combination of embedded documents and structures, automatic high availability, and almost-infinite distribution capability adds up to a fundamentally different way of working with data, building applications, and running those applications in production.

Oracle, SQL Server, and the rest still hang their hats on E.F. Codd’s 51-year-old vision of rows and columns. To obtain high availability and distribution of data, you need add-ons, options packages, baling wire, and duct tape. And you need a lot of database administrators. Not cheap. Even after all that, you’re still trailing the technological edge. This is how wrong I was: our durable competitive advantages over these legacy data stores make competing with those products almost irrelevant. We instead focus on the modern needs of modern developers building modern applications. These developers need to create their own competitive advantage through language-native development, reliable deployments to production, and lightning-fast iteration. And the world is noticing; just check out the falling slope of Oracle and SQL Server and the rising slope of MongoDB on the DB-Engines website.

Which brings me to my second revelation: MongoDB was built for developers, by developers. I always knew that MongoDB was exceedingly fast and easy to program against. One time while I was bored in a meeting (yes, it happens here as well!), I built an Atlas database, loaded it with 350MB of data, downloaded and learned our Compass data discovery tool, built analytics aggregation pipelines, picked up our Charts package, and embedded live charts in a web page. This took me all of 19 minutes, end to end (a sketch of that kind of pipeline appears below). To build something like that for engineers, it just has to be built by engineers, ones who are free to focus on all the rough edges that creep into products as features are added. I was first exposed to software planning and management over 40 years ago, and my LinkedIn profile shows a pretty diverse tour around the industry.
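For readers who haven’t met an aggregation pipeline, here is a minimal pymongo sketch of the kind of grouping a Charts dashboard might render. The collection and field names are hypothetical, not the dataset from the anecdote above.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical deployment

# Filter, group by region, sum a numeric field, and keep the top ten:
# the shape of pipeline a dashboard typically renders.
pipeline = [
    {"$match": {"status": "active"}},
    {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
    {"$limit": 10},
]
for row in client["demo"]["orders"].aggregate(pipeline):
    print(row["_id"], row["total"])
```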
Now, one year in, I can emphatically state that engineering and product at MongoDB are both different and better than at any company I’ve ever had the privilege to work at. Our executive leadership gives engineering and product broad brushstrokes of goals and desired outcomes, and then we work together to come up with detailed roadmaps, updated quarterly, that meet those goals in the way we think best, with no micromanagement. And we’re not afraid of 3-5 year projects, either. For example, multi-cloud was more than three years in the making. Also unlike any other company I’ve been at, we embrace the creation and repayment of tech debt, rather than sweeping it under the rug. We do this by giving our product and engineering teams huge amounts of context, delivered with candor and openness. And one more essential thing: we have an empowered program management team that improves processes (including killing them) as fast as we create them.

In short, we paint the targets for our teams and let them decide how and when to shoot. They even design the bows and arrows. It’s true bottom-up engineering. Our engineers feel valued and understood. And that, in turn, empowers them to develop features that make our customers feel valued and understood, like a unified query language, real-time analytics and charting directly in the console, and multi-region/multi-cloud clusters where all the networking cruft is taken care of for you.

And this brings me to my third revelation: MongoDB is built for even the most demanding mission-critical applications. Fast? Yes. Easy? Of course. But mission-critical? That’s not how I saw MongoDB when I used Version 2 for a massive student data project 10 years ago. While it was the only possible datastore we could have chosen for the amount of data and the speed of ingestion and processing we needed, it was pretty hard to set up and use in a 24x365 environment. MongoDB had gotten ahead of itself in the early 2010s. There was a gap between our capabilities and the expectations of the market. And it was painful. Other databases had had more than 30 years to solidify their systems and operations. We’d had five. But with Version 3 we added a new storage engine. We built on it with Version 4, adding full ACID transactions (a brief sketch of such a transaction appears below) and search. And then again with Version 5, released this week at our .Live conference.

I knew about all this progress intellectually when I joined, of course, but not viscerally. I came to realize that the security, durability, availability, scalability, and operability our platform offers (in addition to all the features that developers love) make it ideal for architecting fast-moving enterprise applications. And I found the proof in our customer list. It reads like a Who’s Who of major global banks, retailers, and telecommunications companies, running core systems like payments, IoT applications, content management, and real-time analytics. They use our database, data lake, analytics, search, and mobile products across their entire businesses, in every major cloud, on-premises, and on their laptops.

And that leads me to my fourth and final revelation: MongoDB is no longer just a database. Of course, the database is still the core. But MongoDB now provides an enterprise-class, mission-critical application data platform: a cohesive, integrated suite of offerings capable of managing modern data requirements across even the most sprawling digital estates, and of scaling to meet the level of any company’s ambition, without sacrificing speed or security.
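As referenced above, here is a minimal pymongo sketch of a multi-document ACID transaction. The accounts and amounts are hypothetical, and transactions require a replica set or sharded cluster rather than a standalone server.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumes a replica set
accounts = client["bank"]["accounts"]  # hypothetical collection

# Move funds between two accounts atomically: both updates commit
# together, or neither does.
with client.start_session() as session:
    with session.start_transaction():
        accounts.update_one({"_id": "alice"}, {"$inc": {"balance": -100}},
                            session=session)
        accounts.update_one({"_id": "bob"}, {"$inc": {"balance": 100}},
                            session=session)
# Leaving the start_transaction() block commits; an exception aborts it.
```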
Since the day I was first introduced to MongoDB’s products, I’ve had tremendous respect and admiration for the teams and their work. After all, I’m a developer, first and foremost. And it always felt like they “got” me. But had I known then what I know now, I would have jumped on this train a long time ago. In fact, I might have camped out on their doorstep with my resume in hand. And who knows? Maybe a bunch of people reading this will do just that, and have their own revelations about how fulfilling and exciting it can be to be at a great company, with a great culture, producing great products. I’ll write another letter a year from now, and let you know how it’s going then. In the meantime, please reach out to me here, or at @MarkLovesTech.