We’re excited to announce analytics node tiers for MongoDB Atlas. Analytics node tiers provide greater control and flexibility by allowing you to customize the exact infrastructure you need for your analytics workloads.
Analytics node tiers provide control and flexibility
Until now, analytics nodes in MongoDB Atlas clusters have used the same cluster tier as all other nodes. However, operational and analytical workloads can vary greatly in their resource requirements. Analytics node tiers let you improve the performance of your analytics workloads by choosing the tier size that best fits your needs, whether larger or smaller than the operational nodes in your cluster. This added level of customization ensures you achieve the performance required for both transactional and analytical queries, without over- or under-provisioning your entire cluster for the sake of the analytical workload. Analytics node tiers are available in both Atlas and Atlas for Government.
Choose a higher or lower analytics node tier based on your analytics needs
Teams whose BI dashboards serve large user bases may want to increase their analytics node tier above that of their operational nodes. Choosing a higher tier is useful when you have many concurrent users or need more memory to serve analytics. Scaling up the entire cluster would be costly; scaling up just your analytics node tier keeps costs optimized.
Teams with inconsistent needs may want to decrease their analytics node tier below that of their operational nodes. The ability to set a lower tier gives you flexibility and cost savings when you have fewer users or analytics are not your top priority.
With analytics node tiers, you get more discretion and control over how you manage your analytics workloads by choosing the appropriately sized tier for your analytics needs.
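As a concrete sketch, analytics node tiers can be set per region when creating or updating a cluster through the Atlas Admin API. The fragment below is illustrative only: the field names (`electableSpecs` for operational nodes, `analyticsSpecs` for analytics nodes) follow the Admin API's cluster resource, but check the current API reference for the exact payload shape in your API version. Here the analytics node runs one tier above the operational nodes:

```json
{
  "replicationSpecs": [
    {
      "regionConfigs": [
        {
          "providerName": "AWS",
          "regionName": "US_EAST_1",
          "electableSpecs": { "instanceSize": "M30", "nodeCount": 3 },
          "analyticsSpecs": { "instanceSize": "M40", "nodeCount": 1 }
        }
      ]
    }
  ]
}
```

Setting `analyticsSpecs.instanceSize` below the electable tier (for example, `M10`) is equally valid for teams that want to economize on lighter analytics workloads.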
Video: Canva's Lessons From Scaling MongoDB Atlas to 10 Billion Documents Across 100 Nodes
Running complex, global, and mission-critical infrastructure at scale is difficult, and anyone who has done it for any length of time usually has a few gnarly lessons to share. At MongoDB World in June 2022, we were lucky enough to feature someone who had done just that. Michael Pearson, software engineering team lead at Canva, gave a talk titled “10 Billion Documents: How Canva Scaled MongoDB to 100 Nodes.”

I’ve had the pleasure of working alongside Pearson and his team for almost a year now, and his presentation focused on some of the massive challenges (and lessons) they’ve faced over the last two years as they have scaled into tens of terabytes of data and tens of billions of documents. This blog covers a few highlights, but I’d recommend everyone check out the original talk in full:

A tricky problem

For the uninitiated, Canva is a visual communication platform that empowers its users to design anything and publish anywhere. Or, as Pearson explained in his talk, “Canva is a really simple way to create beautiful designs and presentations.” Canva’s mission is to empower the world to design, and more than 85 million people in over 190 countries use the platform every month.

As you can imagine, this presents a huge data challenge, and a huge opportunity. Canva holds more than 10 billion designs and receives up to 30,000 document requests per second. The success of the platform comes down to providing a fantastic user experience every time, and to do that Canva needs to present its customers with the right data at the right time. “This could be a really tricky problem for a database platform, particularly for a business based in Sydney with many users on the other side of the world,” said Pearson.

MongoDB Atlas supports the Document Service, which enables opening, creating, updating, or deleting any design on Canva. The Document Service is critical for every single user: if the Document Service is down, then Canva’s users can’t design.
But before we get too far into things, we should probably start with why Canva chose MongoDB in the first place.

Flexibility to support rapidly changing requirements

“Canva was launched to the world back in 2013, when MongoDB was very new to the scene,” explained Pearson. “I'm not sure if there were any other databases that would have been up for the challenge.”

From those earliest days, MongoDB's flexible document model was the perfect fit for Canva's increasingly complex designs and document types. “The flexibility that MongoDB gave us in those early days was instrumental to our success. As the Canva platform evolved, we were throwing new schema and new features at it. MongoDB would just handle it.” He added: “Its continued innovation and problem-solving means MongoDB remains as valuable to us today as it was in 2012.”

At the same time, it was essential that Canva’s engineering team stayed focused on building Canva rather than on managing the data platform. With that in mind, Canva chose to run MongoDB as a service. After trying out multiple options, the team went with mLab and, in 2019, following MongoDB's acquisition of mLab, Canva migrated onto MongoDB Atlas, running on AWS, where it remains to this day.

Ten years of relative bliss

“Before 2021, we had a very hands-off approach to how we used MongoDB,” said Pearson. “MongoDB just handled it. We didn't have to think about it at all."

That's incredible, right? Think about it: for nearly a decade the team barely had to think about its data layer and could spend its time working on new features and making the actual product better for millions of users around the world. It's what every developer wants. Eventually, though, Canva’s own success created certain challenges around scaling.
With the stratospheric increase in growth, the load on the Document Service also continued to increase. MongoDB’s ability to scale horizontally through sharding was critical to overcoming the initial scale challenges, something traditional database management systems would have struggled to achieve, said Pearson. With sharding, data is distributed, or partitioned, across multiple machines, which is useful when no single machine can handle the workload.

In due course, though, some attributes of Canva’s workload presented a new challenge. Said Pearson: “We were unique in that we have one cluster with one collection with a million chunks. Our documents are fairly large, given our product has evolved over the years and we put more and more stuff into our documents.” In other words, Canva performs many updates to relatively large documents, and by mid-2021 the surge in traffic was causing issues.

“Our peak traffic caused three main problems: an inability to run the balancer, latency issues, and disk usage pretty much at capacity,” Pearson explained. “A really ineffective cache caused a really high write load to our cluster. This was causing downstream failures."

Pearson discussed some of the tactical solutions the company took. “Disabling the balancer immediately brought us back to service, but now we knew that there was something wrong with that cluster and we couldn’t operate without the balancer,” said Pearson. “We also noticed that the number of chunks in our cluster had skyrocketed, from around 400,000 to just over a million.”

Getting to the root of the problem

The professional services team at MongoDB discovered that “metadata updates were causing anywhere from a one-second to five-minute stalls in the cluster.” Going from 400,000 chunks to a million chunks, at roughly a minute per change, was not optimal. There were three things to address with that cluster: reduce the number of chunks, reduce the disk contention, and reduce the size of documents.
“With regard to reducing the number of chunks, we just took any contiguous chunks on a shard and merged them unconditionally,” said Pearson. “This was tooling built in collaboration with MongoDB.” After three months of merging chunks, Canva saw massive improvements in its cluster’s performance: a failure rate during maintenance reboots of around 4% dwindled to less than 1%.

Further, to address latency spikes and full disks, the team formulated a six-step plan to move from network-based storage volumes to locally attached disks. This proved a huge success. “We were able to run the balancer. Our large spikes in latency were pretty much all gone, and our disk usage was almost at zero,” Pearson said. He continued: "The key takeaway for me is that sharding is great, but it's never a silver bullet. I don't think we would have caught these issues so quickly without such a thorough incident review process and such a close working relationship with MongoDB."

What was learned?

After presenting all of that information, Pearson closed out the presentation with a few key lessons. For anyone interested in running infrastructure at massive scale, they are simple and worth taking note of:

- Take advantage of the flexible document model to accelerate your pace of development.
- Ensure chunks are distributed uniformly across the cluster in a consistent size.
- Maintain a thorough incident review process and include your trusted partners (such as MongoDB).

Reliability is an essential part of Canva’s engineering practice, and prolonged service disruptions were particularly upsetting not only for engineers but for Canva’s global users. Pearson is glad to report that Canva has seen a turnaround in the number of incidents impacting its Document Service. This has freed the document team to shift focus back to shipping features and ensuring every user has a flawless experience using Canva.
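The chunk-merging tooling Pearson mentions was built internally with MongoDB, so its details aren't public. As a rough illustration of the core idea, the sketch below (with simplified, invented data shapes; real chunk metadata lives in `config.chunks` with BSON bounds) finds runs of contiguous chunks on a shard, each of which could then be collapsed with one `mergeChunks` admin command:

```python
# Sketch: group a shard's contiguous chunks into merge candidates.
# A chunk is simplified here to a (min_key, max_key) pair on a
# numeric shard key; real chunks use BSON bounds from config.chunks.

def merge_candidates(chunks):
    """Given a shard's chunks, return (min, max) bounds covering each
    run of two or more contiguous chunks."""
    runs = []
    run_start, run_end, run_len = None, None, 0
    for lo, hi in sorted(chunks):
        if run_end == lo:                      # contiguous: extend the run
            run_end, run_len = hi, run_len + 1
        else:                                  # gap: flush and start over
            if run_len > 1:
                runs.append((run_start, run_end))
            run_start, run_end, run_len = lo, hi, 1
    if run_len > 1:
        runs.append((run_start, run_end))
    return runs

# Three contiguous chunks collapse to one bound pair; the lone chunk is skipped.
print(merge_candidates([(0, 10), (10, 20), (20, 30), (40, 50)]))
# [(0, 30)]
```

Each returned bound pair could then drive one admin command against the cluster, along the lines of `db.adminCommand({mergeChunks: "db.coll", bounds: [{k: lo}, {k: hi}]})`.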
Interested in joining Canva as it pursues its mission to empower the world to design? Canva is looking for a software engineer to join its Core Data team. Want to take advantage of the flexible document model to accelerate your pace of development? Learn more about MongoDB Atlas.
Choosing the Right Tool for the Job: Understanding the Analytics Spectrum
Data-driven organizations share a common desire to get more value out of the data they're generating. To maximize that value, many of them are asking the same or similar questions:

- How long does it take to get analytics and insights from our application data?
- What would be the business impact if we could make that process faster?
- What new experiences could we create by having analytics integrated directly within our customer-facing apps?
- How do our developers access the tools and APIs they need to build sophisticated analytics queries directly into their application code?
- How do we make sense of voluminous streams of time-series data?

We believe the answer to these questions in today's digital economy is application-driven analytics.

What is Application-Driven Analytics?

Traditionally, there's been a separation at organizations between analytics that run the business and analytics that manage the business. They're built by different teams, they serve different audiences, and the data itself is replicated and stored in different systems. There are benefits to the traditional way of doing things, and it's not going away. However, in today's digital economy, where the need to create competitive advantage and reduce costs and risk is paramount, organizations will continue to innovate on the traditional model.

Today, those needs manifest in the demand for smarter applications that drive better customer experiences and surface insights to initiate intelligent actions automatically. This all happens within the flow of the application, on live operational data, in real time. Alongside those applications, the business also wants faster insights so it can see what's happening when it's happening. This is known as business visibility, and its goal is to increase efficiency by enabling faster decisions on fresher data. In-app analytics and real-time visibility are enabled by what we call application-driven analytics.
Find out why the MongoDB Atlas developer data platform was recently named a Leader in Forrester Wave: Translytical Data Platforms, Q4 2022.

You can find examples of application-driven analytics in multiple real-world industry use cases, including:

- Hyper-personalization in retail
- Fraud prevention in financial services
- Preventative maintenance in manufacturing
- Single subscriber view in telecommunications
- Fitness tracking in healthcare
- A/B testing in gaming

Where Application-Driven Analytics fits in the Analytics Ecosystem

Application-driven analytics complements existing analytics processes in which data is moved out of operational systems into centralized data warehouses and data lakes; in no way does it replace them. However, a broader spectrum of capabilities is now required to meet more demanding business requirements.

Contrasting the two approaches: application-driven analytics is designed to continuously query the data in your operational systems. It works on the freshest data straight from the application, serving many concurrent users at very low latency, and it operates on much smaller subsets of data than centralized analytics systems do, typically hundreds to a few thousand records at a time, with less complex queries. At the other end of the spectrum is centralized analytics. These systems run much more complex queries across massive data sets (hundreds of thousands or even millions of records, potentially at petabyte scale) that have been ingested from many different operational data sources across the organization.

Table 1 below identifies the required capabilities across the spectrum of different classes of analytics. These are designed to help MongoDB’s customers match appropriate technologies and skill sets to each business use case they are building for.
By mapping required capabilities to use cases, you can see how these different classes of analytics serve different purposes. If, for example, we're dealing with recommendations in an e-commerce platform, the centralized data warehouse or data lake will regularly analyze vast troves of first- and third-party customer data. This analysis is then blended with available inventory to create a set of potential customer offers. These offers are then loaded back into operational systems where application-driven analytics is used to decide which offers are most relevant to the customer based on a set of real-time criteria, such as actual stock availability and which items a shopper might already have in their basket. This real-time decision-making is important because you wouldn't want to serve an offer on a product that can no longer be fulfilled or on an item a customer has already decided to buy. This example demonstrates why it is essential to choose the right tool for the job. Specifically, in order to build a portfolio of potential offers, the centralized data warehouse or data lake is an ideal fit. Such technologies can process hundreds of TBs of customer records and order data in a single query. The same technologies, however, are completely inappropriate when it comes to serving those offers to customers in real time. Centralized analytics systems are not designed to serve thousands of concurrent user sessions. Nor can they access real-time inventory or basket data in order to make low latency decisions in milliseconds. Instead, for these scenarios, application-driven analytics served from an operational system is the right technology fit. As we can see, application-driven analytics is complementary to traditional centralized analytics, and in no way competitive to it. 
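The real-time decision step in the e-commerce example above is simple enough to sketch in code. The sketch below is purely illustrative (all names and data shapes are invented): offer candidates precomputed by centralized analytics are filtered at request time against live stock and basket state, which is exactly the kind of low-latency, small-subset, per-user query that application-driven analytics serves.

```python
# Sketch: the in-app, real-time side of the recommendation flow.
# `candidate_offers` would come from the centralized warehouse/lake;
# stock and basket state are live operational data.

def serve_offers(candidate_offers, stock_by_sku, basket_skus, limit=3):
    """Return up to `limit` offers that are in stock and not already
    in the shopper's basket, highest-scoring first."""
    eligible = [
        offer for offer in candidate_offers
        if stock_by_sku.get(offer["sku"], 0) > 0   # drop unfulfillable offers
        and offer["sku"] not in basket_skus        # drop items already chosen
    ]
    eligible.sort(key=lambda o: o["score"], reverse=True)
    return eligible[:limit]

offers = [
    {"sku": "A1", "score": 0.9},   # out of stock -> dropped
    {"sku": "B2", "score": 0.8},
    {"sku": "C3", "score": 0.7},   # already in basket -> dropped
    {"sku": "D4", "score": 0.6},
]
print(serve_offers(offers, {"B2": 5, "C3": 2, "D4": 1}, {"C3"}))
# [{'sku': 'B2', 'score': 0.8}, {'sku': 'D4', 'score': 0.6}]
```

In production this filter would run as a query against the operational database rather than in application memory, but the division of labor is the same: heavy batch scoring upstream, millisecond eligibility checks at serving time.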
The benefits to organizations of using these complementary classes of analytics include:

- Maximizing competitive advantage through smarter and more intelligent applications
- Out-innovating and differentiating in the market
- Improving customer experience and loyalty
- Reducing cost by improving business visibility and efficiency

Through its design, MongoDB Atlas unifies the essential data services needed to deliver on application-driven analytics. It gives developers the tools, tech, and skills they need to infuse analytics into their apps. At the same time, Atlas provides business analysts, data scientists, and data engineers direct access to live data using their regular tools, without impacting the app. For more information about how to implement app-driven analytics and how the MongoDB developer data platform gives you the tools needed to succeed, download our white paper, Application-Driven Analytics: Defining the Next Wave of Modern Apps.