June 17, 2014

This release is packed with new features and UI improvements that should make MongoDB Monitoring and Backup customers happy, including:
- Brand new cluster page charts
- Upgraded the Java driver to version 2.6
- All agent logs can now be downloaded
- Added a “Delete all Deactivated Hosts” button
- Allowed users to specify an extension when providing a 2FA phone number
- A new alert for replication lag on secondaries approaching the oplog window on the primary
Our MongoDB Backup Service also received some major improvements this release. Backup restores can now read from secondaries, and further optimizations, such as automatic cleanup of orphaned heads and streaming oplog processing, improve backup processing and restore time. The backup agent also received another update: MongoDB Backup Agent Version 184.108.40.206-1 contains support for a new API that allows oplogs to be ingested before the entire payload has reached the MMS servers.
Our automation beta is also continuing with select customers. We’ve gotten a lot of feedback thus far, and are really looking forward to the official debut at MongoDB World on June 23rd. You can read more about that (and sign up to receive early access!) here.
Have an issue, a bug, or a feature request? File a ticket in our feature request queue!
6 Rules of Thumb for MongoDB Schema Design: Part 3
By William Zola, Lead Technical Support Engineer at MongoDB

This is our final stop in this tour of modeling One-to-N relationships in MongoDB. In the first post, I covered the three basic ways to model a One-to-N relationship. Last time, I covered some extensions to those basics: two-way referencing and denormalization. Denormalization allows you to avoid some application-level joins, at the expense of having more complex and expensive updates. Denormalizing one or more fields makes sense if those fields are read much more often than they are updated. Read part one and part two if you’ve missed them.

Whoa! Look at All These Choices!

So, to recap:
- You can embed, reference from the “one” side, or reference from the “N” side, or combine a pair of these techniques
- You can denormalize as many fields as you like into the “one” side or the “N” side

Denormalization, in particular, gives you a lot of choices: if there are 8 candidates for denormalization in a relationship, there are 2^8 (256) different ways to denormalize (including not denormalizing at all). Multiply that by the three different ways to do referencing, and you have over 750 different ways to model the relationship. Guess what? You are now stuck in the “paradox of choice”: because you have so many potential ways to model a “one-to-N” relationship, your choice of how to model it just got harder. Lots harder.

Rules of Thumb: Your Guide Through the Rainbow

Here are some rules of thumb to guide you through these innumerable (but not infinite) choices:

One: Favor embedding unless there is a compelling reason not to.

Two: Needing to access an object on its own is a compelling reason not to embed it.

Three: Arrays should not grow without bound. If there are more than a couple of hundred documents on the “many” side, don’t embed them; if there are more than a few thousand documents on the “many” side, don’t use an array of ObjectID references. High-cardinality arrays are a compelling reason not to embed.
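To make the recap and rules One through Three concrete, here is a minimal Python sketch, with plain dicts standing in for BSON documents and literal `_id` strings standing in for ObjectIDs. The examples echo the collections used earlier in this series (addresses, parts/products, hosts/log messages):

```python
# Three ways to model One-to-N, shown as plain Python dicts standing in
# for BSON documents. The literal "_id" strings stand in for ObjectIDs.

# One-to-few: embed the N side directly in the parent document.
person = {
    "_id": "person-1",
    "name": "Kate Monster",
    "addresses": [  # array of embedded documents
        {"street": "123 Sesame St", "city": "Anytown", "cc": "USA"},
        {"street": "123 Avenue Q", "city": "New York", "cc": "USA"},
    ],
}

# One-to-many: the parent holds an array of references to standalone
# documents, so each part can also be accessed on its own (rule Two).
part = {"_id": "part-AAAA", "name": "#4 grommet", "qty": 94}
product = {
    "_id": "product-1",
    "name": "left-handed smoke shifter",
    "parts": ["part-AAAA"],  # array of ObjectID references
}

# One-to-squillions: reference the parent from the N side instead,
# because an array in the parent would grow without bound (rule Three).
host = {"_id": "host-42", "ipaddr": "10.0.0.1"}
logmsg = {
    "time": "2014-06-17T12:00:00Z",
    "message": "cpu is on fire!",
    "host": "host-42",  # parent-reference
}
```

Note how the choice of where the reference lives follows directly from the cardinality of the relationship.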
Four: Don’t be afraid of application-level joins: if you index correctly and use the projection specifier (as shown in part 2), then application-level joins are barely more expensive than server-side joins in a relational database.

Five: Consider the write/read ratio when denormalizing. A field that will mostly be read and only seldom updated is a good candidate for denormalization: if you denormalize a field that is updated frequently, then the extra work of finding and updating all the instances is likely to overwhelm the savings that you get from denormalizing.

Six: As always with MongoDB, how you model your data depends – entirely – on your particular application’s data access patterns. You want to structure your data to match the ways that your application queries and updates it.

Your Guide To The Rainbow

When modeling “One-to-N” relationships in MongoDB, you have a variety of choices, so you have to carefully think through the structure of your data. The main criteria you need to consider are:
- What is the cardinality of the relationship: is it “one-to-few”, “one-to-many”, or “one-to-squillions”?
- Do you need to access the object on the “N” side separately, or only in the context of the parent object?
- What is the ratio of updates to reads for a particular field?

Your main choices for structuring the data are:
- For “one-to-few”, you can use an array of embedded documents.
- For “one-to-many”, or on occasions when the “N” side must stand alone, you should use an array of references. You can also use a “parent-reference” on the “N” side if it optimizes your data access pattern.
- For “one-to-squillions”, you should use a “parent-reference” in the document storing the “N” side.

Once you’ve decided on the overall structure of the data, then you can, if you choose, denormalize data across multiple documents, by either denormalizing data from the “One” side into the “N” side, or from the “N” side into the “One” side.
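Rule Four’s application-level join can be sketched as follows. This is a minimal Python sketch over in-memory lists standing in for collections; against a live deployment the two lookups would be driver queries (for example, a find with an `$in` filter and a projection), and all names here are illustrative:

```python
# In-memory stand-ins for a "products" and a "parts" collection.
products = [
    {"_id": "product-1",
     "name": "left-handed smoke shifter",
     "parts": ["part-A", "part-B"]},  # ObjectID references
]
parts = [
    {"_id": "part-A", "name": "#4 grommet", "qty": 94, "cost": 0.94},
    {"_id": "part-B", "name": "sprocket", "qty": 3, "cost": 12.50},
    {"_id": "part-C", "name": "widget", "qty": 7, "cost": 0.30},
]

def find_one(coll, _id):
    # Stand-in for db.collection.find_one({"_id": _id}).
    return next(d for d in coll if d["_id"] == _id)

# Query 1: fetch the "one" side.
product = find_one(products, "product-1")

# Query 2: fetch all referenced "N"-side documents at once, keeping
# only the field the application needs (the projection). With an index
# on _id, this is one round trip, not one query per part.
wanted = set(product["parts"])
part_names = [{"name": p["name"]} for p in parts if p["_id"] in wanted]
```

The point of the sketch is that the join costs two indexed queries, not N+1, which is why it stays cheap in practice.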
You’d do this only for fields that are frequently read, get read much more often than they get updated, and where you don’t require strong consistency, since updating a denormalized value is slower, more expensive, and not atomic.

Productivity and Flexibility

The upshot of all of this is that MongoDB gives you the ability to design your database schema to match the needs of your application. You can structure your data in MongoDB so that it adapts easily to change, and supports the queries and updates that you need to get the most out of your application.
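Rule Five’s trade-off can be sketched the same way: a minimal, in-memory Python sketch (hypothetical names) of why updating a denormalized field is expensive, since every copy must be found and rewritten:

```python
# The part name is denormalized into each product that uses the part,
# which avoids a join on every read.
products = [
    {"_id": 1, "parts": [{"part_id": "part-A", "name": "#4 grommet"}]},
    {"_id": 2, "parts": [{"part_id": "part-A", "name": "#4 grommet"}]},
]

def rename_part(products, part_id, new_name):
    """Rename a part everywhere its name was denormalized.

    In MongoDB this is a multi-document update; unlike one in-place
    write to a single part document, it must touch every product, and
    it is not atomic across those documents.
    """
    touched = 0
    for prod in products:
        for part in prod["parts"]:
            if part["part_id"] == part_id:
                part["name"] = new_name
                touched += 1
    return touched

touched = rename_part(products, "part-A", "grommet #4")
```

With only two products the cost is trivial; with millions of documents embedding the same value, the same rename becomes a large, slow, non-atomic write, which is why a frequently updated field is a poor denormalization candidate.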
Digital Transformation with MongoDB Atlas and Accenture Cloud First: Three Use Cases for Cloud Modernization
Choosing which providers to partner with when moving to the cloud is a critical first step in pivoting to meet this new field of opportunity. Together, MongoDB and Accenture deliver deep expertise, proven success, and the right balance of experience across industries. Here’s how, and where, we can help.

Journey to the cloud: Three value-unlocking use cases typically prioritized by CIOs and CTOs

Accenture and MongoDB have identified three common use case patterns across industries, each worth examining further:
- Building out APIs for modern application development
- Moving from monolith to microservices for modernization
- Offloading from legacy or mainframe systems

These scenarios typically come to the forefront of the CIO/CTO agenda as leaders aim to accelerate their journey to the cloud and release the gridlock around old but valuable business applications. A common mistake is believing that the business case for this migration is entirely IT-driven. The true motivation must come from understanding that legacy technology is stunting applications’ ability to keep up with the increasing pace of change in the business. Additionally, and especially for mainframe applications, the segment of the labor force that knows these business-critical applications is rapidly approaching retirement, leaving a critical knowledge gap in supporting and extending these applications. Let’s take a deeper look.

1. Building out APIs for modern application development

Innovative applications require businesses to react and adapt quickly. Development needs to be fast, the architecture must be loosely coupled, and the deployment model must have scalability built into its core.
Technical challenges facing today’s applications include:
- Incoming digital requests can vary and follow trends, often in quick bursts and even disruptively
- Businesses need to be able to adapt on the fly – and, therefore, so does the data structure
- Innovation today requires data models that can be extended to meet future demands

The weight of these requirements has grown as businesses become more and more structured for our on-demand world. Mobile applications at a bank, for example, need to be able to serve customers in real time, 24/7. And it’s not just banks: demand volatility is on the rise across industries, and customer expectations continue to evolve.

The secret sauce in these new applications is to build the data to fit the application, rather than dealing with the constraints of the relational data models of previous-generation applications. With MongoDB’s flexible data structures and ease of change, developers can build and adjust the data structures to meet the rapidly evolving needs of new applications, delivering the speed and agility needed to create disruptive change. Furthermore, you can scale out quickly with the horizontal scaling capabilities of the leading NoSQL database, while still having access to ACID database properties where needed. Finally, with MongoDB Atlas, the fully managed MongoDB service, providing scale-on-demand and ready-to-use, secure production infrastructure in Azure, AWS, and Google Cloud, peace of mind has never come easier when preparing for an industry-changing innovation.

2. Moving from monolith to microservices for modernization

Not all innovation comes in a brand-new application. To innovate with old applications, sometimes they need to be extended in ways that were not contemplated when leading architectures centered around monolithic application configurations and deployments.
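As an illustration of that flexibility, here is a minimal Python sketch (plain dicts standing in for documents in a single collection; the account and `alerts` field names are hypothetical) of how documents with different shapes coexist, so a new attribute can ship without a schema migration:

```python
# Documents in the same collection can carry different fields, so a new
# attribute ships without an ALTER TABLE-style migration. Field names
# here are hypothetical.
accounts = [
    # Document written by the original release.
    {"_id": 1, "owner": "Ada", "balance": 120.0},
    # A later release adds mobile-alert preferences to new documents
    # only; existing documents are untouched and old code keeps working.
    {"_id": 2, "owner": "Grace", "balance": 300.0,
     "alerts": {"push": True, "threshold": 50.0}},
]

def wants_push_alert(account):
    # Absent fields are handled in the application layer: documents
    # without "alerts" simply default to no push notifications.
    return account.get("alerts", {}).get("push", False)
```

Old and new document shapes are read through the same code path, which is what lets the data model evolve at the pace of the application.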
Many of our clients experience challenges with their application portfolios because they have not kept pace with the rapid evolution of technology, driving one critical business problem: the inability to scale at pace. Digital Decoupling is an Accenture solution designed to extend the life of critical business applications while augmenting them with new functionality, often through the modern development techniques and architecture described in section 1. Initially, this means yet another technology pattern is included in the footprint of the business application. The difference, however, is that the transformation from legacy architectures to modern ones can be accelerated. Once you have experienced the simplicity of data that matches exactly what your application needs, it will be difficult to stop!

Diagram A: Monolith to microservices transformation

The diagram above (Diagram A) illustrates the historical move from monolith to microservices toward a more ideal data architecture. In the real world, however, companies often find moving completely to a microservices layer too expensive or too much effort. That is why, over the last five years, Accenture has developed its Digital Decoupling approach to solving this problem. More details can be found here. With Accenture’s Smart Data Mover, data can be moved from the relational database into MongoDB, and code can be substantially ported as-is while the data access layer is simplified, accelerating organizations’ onramp to rapid change and innovation. The size and shape of the new services can range from microservices for rapidly evolving functions to coarse-grained services that don’t change much and don’t warrant further investment. MongoDB is designed to work well as the storage layer of a microservices or API architecture and can mitigate risk during the refactoring phase, so that businesses can meet demands that ebb, flow, and ideally grow in traffic.

3.
Offloading from legacy or mainframe systems

As more and more companies migrate to the cloud, a common (and often painful) use case is the requirement to offload applications from mainframes and other legacy data stores. It is painful for many reasons, most predominantly because no one within the organization has the institutional knowledge required to maintain and operate the legacy application, and because moving the data off the system often involves several complex technical hurdles. To solve this, Accenture’s Digital Decoupling approach extends to the mainframe as well. The approach for the mainframe offload scenario is shown below (Diagram B).

Diagram B: Mainframe offload reference architecture with MongoDB Atlas

When transitioning from a legacy or mainframe system, there are several aspects to consider:
- How will the data and code be migrated?
- If moving to a more scalable data solution, how will code in triggers and stored procedures get ported?
- How will the new data platform deliver the required flexibility and scale (and not just for read-only copies of the data), in addition to resilience and other non-functional requirements?

The last point is the pivot of our conversation. To gain a true advantage, most organizations will need to preserve traditional enterprise capabilities while also meeting the needs of a modern day and age in which storage is no longer a concern. To help move the data over, Accenture Smart Data Mover can get the job done seamlessly. CDC tools and/or Kafka can also be leveraged for continuous updates if the preferred source of truth for some applications still needs to be the mainframe. Please check out this solutions guide for more information about best practices for offloading from the mainframe with MongoDB.

Why MongoDB?
MongoDB’s document-based, distributed database provides users with the versatility to build sophisticated applications that can respond and adapt to changing customer demands and market trends. As the leading choice for general-purpose databases, MongoDB reduces time spent on development cycles and empowers developers with a flexible schema and the tools they need to innovate. Furthermore, MongoDB’s fully managed database-as-a-service option, MongoDB Atlas, is the only multi-cloud document database available in the market, and delivers the most advanced security and data distribution capabilities of any fully managed service. MongoDB has also spent years building out a full modernization program to help customers and their architects with their journey to the cloud. This program includes training, tools, and best practices that have been co-developed with System Integrators, especially Accenture. This is why MongoDB is partnering closely with Accenture to help customers move to the cloud.

Why Accenture Cloud First?

Accenture Cloud First is a multi-service group of 70,000 cloud professionals across the globe that brings together the full power and breadth of Accenture’s industry and technology capabilities to help organizations move to the cloud with greater speed and achieve greater value, faster. The Cloud First team combines world-class cloud and cloud-native engineering, learning and talent development expertise, deep experience in cloud change management, and cloud-ready operating models with a commitment to responsible business by design. Security, data privacy, responsible use of artificial intelligence, sustainability, and ethics and compliance are built into the fundamental changes Accenture helps companies achieve.
How MongoDB and Accenture can help

Our clients have taken a deep, insightful look at their application portfolios and uniformly decided that they need to accelerate their migrations to the cloud, realizing that the data center isn’t where they want their employees, their most precious resource, to spend most of their time. This realization has pushed Cloud Migration and Modernization efforts front and center for the C-Suite. Accenture has developed an end-to-end Value-Led Modernization methodology that analyzes each business case to deliver the most value possible for our clients. We built this methodology to be laser-focused on delivering the right outcome: increased value to the organization, rather than modernization for modernization’s sake. Core to our methodology is the belief that modernization initiatives should take a lean engineering approach to the work itself while simultaneously enabling new value streams for the business. To enable this dual reality that CIOs and CTOs find themselves in today, we focus architecturally on three tenets:
- Minimally invasive modernization efforts using our digital decoupling techniques
- Establishment of an enterprise, event-driven architecture as the core of communication
- Establishment of a modern data architecture to underpin new value propositions while simultaneously supporting existing value cases

To remain competitive in today’s ever-changing marketplace, you need to be able to scale quickly and securely while enabling access to one of the business’ most important assets: its data. Together, Accenture and MongoDB have made several investments to date and continue to partner to bring the best to our clients. Accenture and MongoDB have launched another joint solution, our “Modernizer Tool,” to help customers modernize as they migrate to the cloud.
The Modernizer tool is an asset that identifies relevant information and, using standard techniques, speeds up the definition of data feeding, integration, and migration processes. The tool aims to mitigate data modeling and integration challenges by applying a metadata-driven approach that anticipates key risks.

Looking Forward

Bottom line? Your organization needs a modern database, one that gives it the speed and agility to keep up with ever-changing business needs. Further investment in modernization today isn’t an option; it’s a necessity to remain competitive, to realize your digital transformation goals, and to give your organization the foundation it needs to innovate. Accenture and MongoDB will continue to partner in making investments in Cloud Enablement, Cloud Migration, and Modernization that will enable you to realize your goals. We look forward to working with our customers on cloud modernization projects in the field. Find out more about the MongoDB & Accenture partnership here.