We’re excited to announce a new feature for Monitoring in both Cloud Manager and Atlas: the Query Targeting chart. This chart tracks two metrics: “scanned/returned” and “scanned objects/returned”.
“Scanned/returned” is the ratio between the number of index items scanned and the number of documents returned by queries. If this value is 1.0, your query scanned exactly as many index items as it returned documents – an efficient query. This metric is available for MongoDB 2.4 and newer.
“Scanned objects/returned” is similar, except it’s about the number of documents scanned versus the number returned. A large number is a sign that you may need an index on the fields you are querying on. This metric is available for MongoDB 2.6 and newer.
For a better understanding of this chart, consider a collection with 1000 documents. We issue a query without an index, so it performs a collection scan. Scanned objects/returned for this query could be as bad as 1000, but the average value would be 500. Now add an index that supports the query: to return one document, we scan only one document, so scanned/returned is 1 and scanned objects/returned is also 1. Finally, suppose you run a covered query: scanned/returned is 1, but scanned objects/returned is 0, because the index contains all the data you requested, so no documents needed to be examined!
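To make the arithmetic concrete, here is a small Python sketch that computes both ratios from explain-style counters. The function name and structure are illustrative, not part of MongoDB's API; the counter names in the comments mirror MongoDB's `explain()` output.

```python
def query_targeting(keys_examined, docs_examined, n_returned):
    """Compute the two Query Targeting ratios from explain-style counters.

    keys_examined: index items scanned ("totalKeysExamined" in explain output)
    docs_examined: documents scanned ("totalDocsExamined")
    n_returned:    documents returned ("nReturned")
    """
    if n_returned == 0:
        return None, None  # ratios are undefined when nothing is returned
    return keys_examined / n_returned, docs_examined / n_returned

# Collection scan over 1000 documents that returns 1 document:
print(query_targeting(0, 1000, 1))   # scanned/returned 0.0, scanned objects/returned 1000.0

# Same query with a supporting index:
print(query_targeting(1, 1, 1))      # both ratios are 1.0

# Covered query: the index alone answers the query:
print(query_targeting(1, 0, 1))      # scanned/returned 1.0, scanned objects/returned 0.0
```

The closer both ratios stay to 1 (or 0 for covered queries), the less wasted work your queries are doing.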
This feature is available for all Cloud Manager and Atlas deployments. We believe this new chart will help you refine your queries and indexes to get the best performance out of your MongoDB deployment. However, if you need more help, the Visual Profiler as part of Cloud Manager Premium can help you identify slow queries and suggest indexes as well. Contact your Account Executive for more information about MongoDB subscriptions with access to Cloud Manager Premium.
Peter C. Gravelle is a Technical Account Manager at MongoDB, Inc. He can be found via Atlas’ chat option as well as in tickets. He can also be found in New York City.
Building Applications with MongoDB's Pluggable Storage Engines: Part 1
This is the first in a two-post series about MongoDB’s pluggable storage engines. This post discusses the characteristics of MongoDB’s storage engines.

**Introduction**

With users building increasingly complex data-driven apps, there is no longer a "one size fits all" database storage technology capable of powering every type of application built for the enterprise. Modern applications need to support a variety of workloads with different access patterns and price/performance profiles – from low latency, in-memory read and write applications, to real-time analytics, to highly compressed "active" archives.

Through the use of pluggable storage engines, MongoDB can be extended with new capabilities and configured for optimal use of specific hardware architectures. This approach significantly reduces developer and operational complexity compared to running multiple database technologies. Storage engines can be mixed in the same replica set or sharded cluster. Users can also leverage the same MongoDB query language, data model, scaling, security, and operational tooling across different applications, each powered by a different pluggable MongoDB storage engine.

**Figure 1:** Mix and match storage engines within a single MongoDB replica set

MongoDB 3.2 ships with four supported storage engines that can be optimized for specific workloads:

- The default WiredTiger storage engine. For most applications, WiredTiger's granular concurrency control and native compression will provide the best all-around performance and storage efficiency.
- The Encrypted storage engine, protecting highly sensitive data without the performance or management overhead of separate file system encryption. The Encrypted storage engine is based upon WiredTiger, so throughout this post, statements regarding WiredTiger also apply to the Encrypted storage engine. This engine is part of MongoDB Enterprise Advanced.
- The In-Memory storage engine, for applications that have extremely strict SLAs for consistent and predictable low latency, while not requiring disk durability for the data. This engine is part of MongoDB Enterprise Advanced.
- The MMAPv1 engine, an improved version of the storage engine used in pre-3.x MongoDB releases. MMAPv1 was the default storage engine in MongoDB 3.0.

MongoDB allows users to mix and match multiple storage engines within a single MongoDB cluster. This flexibility provides a simple and reliable approach to supporting diverse workloads. Traditionally, multiple database technologies would need to be managed to meet these needs, with complex, custom integration code to move data between technologies and to ensure consistent, secure access. With MongoDB’s flexible storage architecture, the database automatically manages the movement of data between storage engine technologies using native replication. This approach significantly reduces developer and operational complexity when compared to running multiple distinct database technologies.

**Table 1:** Comparing the MongoDB WiredTiger, In-Memory, Encrypted, and MMAPv1 storage engines

**WiredTiger Storage Engine**

MongoDB acquired WiredTiger in 2014, and with it the experts behind the WiredTiger storage engine: co-founders Keith Bostic (founder of Sleepycat Software) and Dr. Michael Cahill, and their colleagues. Bostic and Cahill were the original architects of Berkeley DB, the most widely used embedded data management software in the world, and have decades of experience writing high-performance storage engines. WiredTiger leverages modern hardware architectures and innovative software algorithms to provide industry-leading performance for the most demanding applications. WiredTiger is ideal for a wide range of operational applications and is therefore MongoDB’s default storage engine.
It should be the starting point for all new applications, with the exception of cases where you need the specific capabilities of the In-Memory or Encrypted storage engines. The key advantages of WiredTiger include:

Maximize Available Cache: WiredTiger maximizes use of available memory as cache to reduce I/O bottlenecks. There are two caches: the WiredTiger cache and the filesystem cache. The WiredTiger cache stores uncompressed data and provides in-memory-like performance. The operating system’s filesystem cache stores compressed data. When data is not found in the WiredTiger cache, WiredTiger will look for it in the filesystem cache.

**Figure 2:** WiredTiger caches (WiredTiger cache and FS cache)

Data found in the filesystem cache first goes through a decompression process before moving to the WiredTiger cache. The WiredTiger cache performs best when it holds as much of the working set as possible. However, it is also important to reserve memory for other processes that need it, such as the operating system, including the filesystem cache. This also includes MongoDB itself, which as a whole will consume more memory than what is in active use by WiredTiger. MongoDB defaults to a WiredTiger cache size of approximately 60% of RAM. The minimum to leave for the filesystem cache is 20% of available memory; anything lower and the operating system may be constrained for resources.

High Throughput: WiredTiger uses “copy on write” — when a document is updated, WiredTiger makes a new copy of the document and determines the latest version to return to the reader. This approach allows multiple clients to simultaneously modify different documents in a collection, resulting in higher concurrency and throughput. Optimum write performance is achieved when an application is utilizing a host with many cores (the more the better), and multiple threads are writing to different documents.
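The memory split described under "Maximize Available Cache" can be sketched as follows. This is a simplification that mirrors the guidance in the text (roughly 60% of RAM for the WiredTiger cache, at least 20% left for the filesystem cache), not mongod's exact internal sizing formula, which varies by MongoDB version.

```python
def wiredtiger_cache_plan(total_ram_gb):
    """Rough memory split for a mongod host, per the guidance above:
    ~60% of RAM for the WiredTiger cache, a floor of 20% reserved for
    the filesystem cache, and the remainder for the OS plus the rest
    of the mongod process. Illustrative only, not mongod's formula."""
    wt_cache = 0.6 * total_ram_gb
    fs_cache_floor = 0.2 * total_ram_gb
    other = total_ram_gb - wt_cache - fs_cache_floor
    return {
        "wiredtiger_cache_gb": wt_cache,
        "filesystem_cache_floor_gb": fs_cache_floor,
        "os_and_other_gb": other,
    }

plan = wiredtiger_cache_plan(64)
print(plan)  # roughly 38.4 GB WiredTiger cache on a 64 GB host
```

The point of the floor is the decompression path: pages evicted from the WiredTiger cache are far cheaper to bring back from the filesystem cache than from disk.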
Reducing Storage Footprint and Improving Disk IOPS: WiredTiger uses compression algorithms to reduce the amount of data stored on disk. Not only is storage reduced, but IOPS performance is increased, as fewer bits are read from or written to disk. Some types of data compress better than others: text is highly compressible, while binary data may not be, since it may already be encoded and compressed. WiredTiger does incur additional CPU cycles when using compression, but users can configure compression schemes to trade CPU overhead against compression ratio. Snappy, the default compression engine, provides a good balance between compression ratio and CPU overhead. Zlib will achieve higher compression ratios, but incurs additional CPU cycles.

Compression (Indexes and Journals): Indexes can be compressed in memory as well as on disk. WiredTiger utilizes prefix compression to compress the indexes, conserving RAM as well as freeing up storage IOPS. Journals are compressed by default with Snappy compression.

Multi-Core Scalability: As CPU manufacturers shrink to smaller lithographies and power consumption becomes more and more of an issue, processor trends have shifted to multi-core architectures in order to sustain the cadence of Moore’s law. WiredTiger was designed with modern, multi-core architectures in mind, and provides scalability across multi-core systems. Programming techniques such as hazard pointers, lock-free algorithms, and fast latching minimize contention between threads. Threads can perform operations without blocking each other — resulting in less thread contention, better concurrency, and higher throughput.

Read Concern: WiredTiger allows users to specify a level of isolation for their reads. Read operations can return a view of data that has been accepted or committed to disk by a majority of the replica set.
This provides a guarantee that applications only read data that will persist in the event of failure and won’t be rolled back when a new replica set member is promoted to primary. For more information on migrating from MMAPv1 to WiredTiger, see the documentation.

**Encrypted Storage Engine**

Data security is top of mind for many executives due to increased attacks, as well as a series of data breaches in recent years that have negatively impacted several high-profile brands. For example, in 2015, a major health insurer was the victim of a massive data breach in which criminals gained access to the Social Security numbers of more than 80 million people — resulting in an estimated cost of $100M. In the end, one of the critical vulnerabilities was that the health insurer did not encrypt sensitive patient data stored at rest.

Coupled with MongoDB’s extensive access control and auditing capabilities, encryption is a vital component in building applications that are compliant with standards such as HIPAA, FERPA, PCI, SOX, GLBA, and ISO 27001. The Encrypted storage engine is based on WiredTiger, and thus is designed for operational efficiency and performance:

- Document-level concurrency control and compression
- Support for Intel’s AES-NI equipped CPUs for acceleration of the encryption/decryption process
- As documents are modified, only updated storage blocks need to be encrypted, rather than the entire database

With the Encrypted storage engine, protection of data at rest is an integral feature of the database. The raw database “plaintext” content is encrypted using an algorithm that takes a random encryption key as input and generates ciphertext that can only be decrypted with the proper key. The Encrypted storage engine supports a variety of encryption algorithms from the OpenSSL library. AES-256 in CBC mode is the default, while other options include AES-256 in GCM mode, as well as FIPS mode for FIPS 140-2 compliance.
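The key management described in this section hinges on a hierarchy: an internal database key encrypts the data files, and that key is itself wrapped by an external master key, so rotating the master key never requires re-encrypting the data files. The deliberately toy Python sketch below illustrates that hierarchy. It uses a throwaway XOR keystream in place of AES purely so the example is self-contained; it is not real cryptography and nothing here reflects MongoDB's actual implementation.

```python
import os, hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """TOY XOR-with-keystream cipher, for illustration only.
    The real engine uses AES-256; never use this for actual data."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# The internal database key encrypts the data files...
db_key = os.urandom(32)
page = toy_encrypt(db_key, b"sensitive patient record")

# ...and is itself wrapped with the external master key (the keystore).
master_key = os.urandom(32)
keystore = toy_encrypt(master_key, db_key)

# Rotation: only the master key changes; the data files are untouched.
new_master = os.urandom(32)
keystore = toy_encrypt(new_master, toy_decrypt(master_key, keystore))

# The data files still decrypt via the (unchanged) internal key:
recovered_db_key = toy_decrypt(new_master, keystore)
assert toy_decrypt(recovered_db_key, page) == b"sensitive patient record"
```

Because only the small wrapped-key blob is re-encrypted on rotation, annual key rotation stays cheap regardless of how large the database is.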
Encryption is performed at the page level to provide optimal performance. Instead of having to encrypt/decrypt the entire file or database for each change, only the modified pages need to be encrypted or decrypted, resulting in less overhead and higher performance. Additionally, the Encrypted storage engine provides safe and secure management of the encryption keys. Each encrypted node contains an internal database key that is used to encrypt or decrypt the data files. The internal database key is wrapped with an external master key, which must be provided to the node for it to initialize. To ensure that keys are never written or paged to disk in unencrypted form, MongoDB uses operating system protection mechanisms, such as VirtualLock and mlock, to lock the process’s virtual memory space into memory.

There are two primary ways to manage the master key: through an integration with a third-party key management appliance via the Key Management Interoperability Protocol (KMIP), or local key management via a keyfile. Most regulatory requirements mandate that the encryption keys be rotated and replaced with a new key at least once annually. MongoDB can achieve key rotation without incurring downtime by performing rolling restarts of the replica set. When using a KMIP appliance, the database files themselves do not need to be re-encrypted, thereby avoiding the significant performance overhead imposed by key rotation in other databases. Only the master key is rotated, and the internal database keystore is re-encrypted. It is recommended to use a KMIP appliance with the Encrypted storage engine.

**In-Memory Storage Engine**

In modern applications, different subsets of application data have different latency and durability requirements. The In-Memory storage engine is designed for applications that have extremely strict SLAs, even at 99th percentiles. The In-Memory engine keeps all of the data in memory, and does not write anything to disk.
Data always has to be populated on start-up, and nothing can be assumed to be present on restart, including application data and system data (i.e., users, permissions, index definitions, oplog, etc.). All data must fit into the specified in-memory cache size. The In-Memory storage engine combines the predictable latency benefits of an “in-memory cache” with the rich query and analytical capabilities of MongoDB. It has the advantage of using the exact same APIs as any other MongoDB server, so your applications do not need special code to interact with the cache, such as handling cache invalidation as data is updated. In addition, a mongod that's configured with the In-Memory storage engine can be part of a replica set, and thus can have another node in the same replica set backed by fast persistent storage.

The In-Memory engine is currently supported on MongoDB 3.2.6+. For performance metrics on the In-Memory storage engine, view the MongoDB Pluggable Storage Engine white paper. For applications requiring predictable latencies, the In-Memory engine is the recommended storage engine, as it provides low latency while also minimizing tail latencies, resulting in high performance and a consistent user experience. Some of the key benefits of the In-Memory engine:

- Predictable and consistent latency for applications that want to minimize latency spikes
- Applications can combine separate caching and database layers into a single layer — all accessed and managed with the same APIs, operational tools, and security controls
- Data redundancy with use of a WiredTiger secondary node in a replica set

**MMAPv1 Storage Engine**

The MMAPv1 engine is an improved version of the storage engine used in pre-3.x MongoDB releases. It utilizes collection-level concurrency and memory-mapped files to access the underlying data storage. Memory management is delegated to the operating system. This prevents compression of collection data, though journals are compressed with Snappy.
In the second part of this blog series, we will discuss how to select which storage engine to use.

Learn more about MongoDB’s pluggable storage engines: read the whitepaper, Pluggable Storage Engine Architecture.

About the author - Jason Ma

Jason Ma is a Principal Product Marketing Manager based in Palo Alto, and has extensive experience in technology hardware and software. He previously worked for SanDisk in Corporate Strategy doing M&A and investments, and as a Product Manager on the InfiniFlash All-Flash JBOF. Before SanDisk, he worked as a hardware engineer at Intel and Boeing. Jason has a BSEE from UC San Diego, an MSEE from the University of Southern California, and an MBA from UC Berkeley.
Building Applications with MongoDB's Pluggable Storage Engines: Part 2
In the previous post, I discussed MongoDB’s pluggable storage engine architecture and the characteristics of each storage engine. In this post, I will talk about how to select which storage engine to use, as well as mixing and matching storage engines in a replica set.

**How To Select Which Storage Engine To Use**

WiredTiger Workloads

WiredTiger will be the storage engine of choice for most workloads. WiredTiger’s concurrency and excellent read and write throughput are well suited for applications requiring high performance:

- IoT applications: sensor data ingestion and analysis
- Customer data management and social apps: updating all user interactions and engagement from multiple activity streams
- Product catalogs, content management, real-time analytics

For most workloads, it is recommended to use WiredTiger. The rest of this post will discuss situations where other storage engines may be applicable.

Encrypted Workloads

The Encrypted storage engine is ideally suited to regulated industries such as finance, retail, healthcare, education, and government. Enterprises that need to build applications compliant with PCI DSS, HIPAA, NIST, FISMA, STIG, or other regulatory initiatives can use the Encrypted storage engine with native MongoDB security features such as authorization, access controls, authentication, and auditing to achieve compliance. Before MongoDB 3.2, the primary methods to provide encryption at rest were 3rd-party applications that encrypt files at the application, file system, or disk level. These methods work well with MongoDB but tend to add extra cost, complexity, and overhead. The Encrypted storage engine adds ~15% overhead compared to WiredTiger, as available CPU cycles are allocated to the encryption/decryption process – though the actual impact will depend on your data set and workload. This is still significantly less than 3rd-party disk and file system encryption, where customers have noticed 25% overhead or more.
More information about the performance benchmark of the Encrypted storage engine can be found here. The Encrypted storage engine, combined with MongoDB native security features such as authentication, authorization, and auditing, provides an end-to-end security solution to safeguard data with minimal performance impact.

In-Memory Workloads

The advantages of in-memory computing are well understood. Data can be accessed in RAM nearly 100,000 times faster than retrieving it from disk, delivering orders-of-magnitude higher performance for the most demanding applications. With RAM prices continuing to tumble and new technologies such as 3D non-volatile memory on the horizon, the performance gains can now be realized with better economics than ever before. Not only is fast access important, but predictable access, or latency, is essential for certain modern-day applications. For example, financial trading applications need to respond quickly to fluctuating market conditions as data flows through trading systems. Unpredictable latency outliers can mean the difference between making or losing millions of dollars.

While WiredTiger will be more than capable for most use cases, applications requiring predictable latency will benefit the most from the In-Memory storage engine. Enterprises can harness the power of MongoDB's core capabilities (expressive query language, primary and secondary indexes, scalability, high availability) with the benefits of predictable latency from the In-Memory storage engine.
Examples of when to use the In-Memory engine are:

- Financial: Algorithmic trading applications that are highly sensitive to predictable latency, such as when latency spikes from high traffic volumes can overwhelm a trading system and cause transactions to be lost or require re-transmission; real-time monitoring systems that detect anomalies such as fraud; applications that require predictable latency for processing of trade orders, credit card authorizations, and other high-volume transactions
- Government: Sensor data management and analytics applications interested in spatially and temporally correlated events that need to be contextualized with real-time sources (weather, social networking, traffic, etc.); security threat detection
- eCommerce / Retail: Session data of customer profiles during a purchase; product search cache; personalized recommendations in real time
- Online Gaming: First-person shooter games; caching of player data
- Telco: Real-time processing and caching of customer information and data plans; tracking network usage for millions of users and performing real-time actions such as billing; managing user sessions in real time to create personalized experiences on any mobile device

MMAPv1 Workloads

Though WiredTiger is better suited for most application workloads, there are certain situations where users may want to remain on MMAPv1:

- Legacy Workloads: Enterprises that are upgrading to the latest MongoDB releases (3.0 and 3.2) and don’t want to re-qualify their applications with a new storage engine may prefer to remain with MMAPv1.
- Version Downgrade: The upgrade from MMAPv1 to WiredTiger is a simple binary-compatible “drop in” upgrade, but once upgraded to MongoDB 3.0 or 3.2, users cannot downgrade to a version lower than 2.6.8. This should be kept in mind by users who want to stay on version 2.6. Many features have been added to MongoDB since version 2.6, so it is highly recommended to upgrade to version 3.2.
Mixed Storage Engine Use Cases

MongoDB’s flexible storage architecture provides a powerful option to optimize your database. Storage engines can be mixed and matched within a single MongoDB cluster to meet diverse application needs for data. Users can evaluate different storage engines without impacting deployments, and can also easily migrate and upgrade to a new storage engine following the rolling upgrade process. To simplify this even further, users can utilize Ops Manager or Cloud Manager to upgrade their cluster’s version of MongoDB with the click of a button. Though there are many possible mixed storage configurations, here are a few examples of mixed storage engine configurations with the In-Memory and WiredTiger engines.

**Figure 10:** eCommerce application with mixed storage engines

Since the In-Memory storage engine does not persist data as a standalone node, it can be used with another storage engine to persist data in a mixed storage engine configuration. The eCommerce application in Figure 10 uses two sharded clusters with three nodes (1 primary, 2 secondaries) in each cluster. The replica set with the In-Memory engine as the primary node provides low latency access and high throughput to transient user data such as session information, shopping cart items, and recommendations. The application’s product catalog is stored in the sharded cluster with WiredTiger as the primary node. Product searches can utilize the WiredTiger in-memory cache for low latency access. If the product catalog’s data storage requirements exceed server memory capacity, data can be stored on and retrieved from disk. This tiered approach enables “hot” data to be accessed and modified quickly in real time, while persisting “cold” data to disk. The configuration in Figure 11 demonstrates how to preserve low latency capabilities in a cluster after failover.
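The failover behavior in these mixed configurations is driven by member priorities. The toy Python sketch below models a five-node mixed replica set and picks a new primary as the highest-priority healthy member. This is not MongoDB's actual election protocol (which also weighs oplog recency, votes, and more); the hostnames and priority values are hypothetical, chosen only to show In-Memory nodes taking precedence over WiredTiger nodes.

```python
# Toy model of a mixed five-node replica set: two In-Memory nodes and
# three WiredTiger nodes. Hypothetical priorities give the In-Memory
# nodes first claim on the primary role.
members = [
    {"host": "inmem-0", "engine": "inMemory",   "priority": 3, "healthy": True},
    {"host": "inmem-1", "engine": "inMemory",   "priority": 2, "healthy": True},
    {"host": "wt-0",    "engine": "wiredTiger", "priority": 1, "healthy": True},
    {"host": "wt-1",    "engine": "wiredTiger", "priority": 1, "healthy": True},
    {"host": "wt-2",    "engine": "wiredTiger", "priority": 1, "healthy": True},
]

def elect_primary(members):
    """Pick the highest-priority healthy member (a deliberate
    simplification of a real replica set election)."""
    healthy = [m for m in members if m["healthy"]]
    if not healthy:
        return None
    return max(healthy, key=lambda m: m["priority"])["host"]

print(elect_primary(members))        # the high-priority In-Memory node
members[0]["healthy"] = False        # primary In-Memory node fails
print(elect_primary(members))        # the second In-Memory node takes over
members[1]["healthy"] = False        # it fails too
print(elect_primary(members))        # a WiredTiger node becomes primary
```

Failing over to the second In-Memory node first keeps the cache warm; only when both In-Memory nodes are down do writes fall through to a WiredTiger primary.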
Setting priority=1 on the secondary In-Memory node will result in automatic failover to that secondary, and eliminates the need to fully repopulate the failed primary when it comes back online. Additionally, if the transient data needs to be persisted, then a secondary WiredTiger node can be configured to act as a replica, providing high availability and disk durability.

![Mixed storage engines, with hidden WiredTiger secondary](https://webassets.mongodb.com/_com_assets/cms/StorageEngineIMG12-pmr796cxcj.png)

**Figure 11:** Mixed storage engines, with hidden WiredTiger secondary

To provide even higher availability and durability, a five-node replica set with two In-Memory and three WiredTiger nodes can be used. In Figure 12, the In-Memory engine is the primary node, with four secondary nodes. If the primary fails, the secondary In-Memory node will automatically fail over to become the primary, and there will be no need to repopulate the cache. If the new primary In-Memory node also fails, then the replica set will elect a WiredTiger node as primary. This mitigates any disruption in operation, as clients will still be able to write uninterrupted to the new WiredTiger primary.

![Mixed storage engines with five node replica set](https://webassets.mongodb.com/_com_assets/cms/StorageEngineIMG13-qs7y9kdkab.png)

**Figure 12:** Mixed storage engines with five node replica set

Additionally, a mixed storage engine approach is ideally suited for a microservices architecture. In a microservices architecture, a shared database between services can affect multiple services and slow down development. By decoupling the database and selecting the right storage engines for specific workloads, enterprises can improve performance and quickly develop features for individual services. Learn more about MongoDB and microservices.

Conclusion

MongoDB is the next-generation database used by the world’s most sophisticated organizations, from cutting-edge startups to the largest companies, to create applications never before possible, at a fraction of the cost of legacy databases.
With pluggable storage engine APIs, MongoDB continues to innovate and provide users the ability to choose the most optimal storage engine for their workloads. Now, enterprises have an even richer ecosystem of storage options to solve new classes of use cases with a single database framework.

If guidance is needed on upgrading to MongoDB 3.2, MongoDB offers consulting to help ensure a smooth transition without interruption.