MongoDB Developer Blog

Deep dives into technical concepts, architectures, and innovations with MongoDB.

The Cost of Not Knowing MongoDB, Part 3: appV6R0 to appV6R4

Welcome to the third and final part of the series "The Cost of Not Knowing MongoDB." Building upon the foundational optimizations explored in Part 1 and Part 2, this article delves into advanced MongoDB design patterns that can dramatically transform application performance. In Part 1, we improved application performance by concatenating fields, changing data types, and shortening field names. In Part 2, we implemented the Bucket Pattern and Computed Pattern and optimized the aggregation pipeline to achieve even better performance.

In this final article, we address the issues and improvements identified in appV5R4. Specifically, we focus on reducing the document size in our application to alleviate the disk throughput bottleneck on the MongoDB server. This reduction will be accomplished by adopting a dynamic schema and changing the storage compression algorithm. The application versions and revisions in this article are attributed to a senior MongoDB developer, as they build on all the previous versions and use the Dynamic Schema pattern, which isn't commonly seen.

Application version 6 revision 0 (appV6R0): A dynamic monthly bucket document

As mentioned in the Issues and Improvements of appV5R4 from the previous article, the primary limitation of our MongoDB server is its disk throughput. To address this, we need to reduce the size of the documents being stored. Consider the following document from appV5R3, which has provided the best performance so far:

```javascript
const document = {
  _id: Buffer.from("...01202202"),
  items: [
    { date: new Date("2022-06-05"), a: 10, n: 3 },
    { date: new Date("2022-06-16"), p: 1, r: 1 },
    { date: new Date("2022-06-27"), a: 5, r: 1 },
    { date: new Date("2022-06-29"), p: 1 },
  ],
};
```

The items array in this document contains only four elements, but on average, it will have around 10 elements, and in the worst-case scenario, it could have up to 90 elements. These elements are the primary contributors to the document size, so they should be the focus of our optimization efforts.

One commonality among the elements is the date field, whose values repeat information that is already encoded in the document's _id. By rethinking how this field and its value are stored, we can reduce storage requirements. An unconventional solution we could use is:

- Changing the items field type from an array to a document.
- Using the date value as the field name in the items document.
- Storing the status totals as the value for each date field.

Here is the previous document represented using the new schema idea:

```javascript
const document = {
  _id: Buffer.from("...01202202"),
  items: {
    "20220605": { a: 10, n: 3 },
    "20220616": { p: 1, r: 1 },
    "20220627": { a: 5, r: 1 },
    "20220629": { p: 1 },
  },
};
```

While this schema may not significantly reduce the document size compared to appV5R3, we can further optimize it by leveraging the fact that part of the date is already embedded in the _id field, eliminating the need to repeat it in the field names of the items document. With this approach, the items document adopts a Dynamic Schema, where field names encode information and are not predefined.

To demonstrate various implementation possibilities, we will revisit all the bucketing criteria used in the appV5RX implementations, starting with appV5R0. For appV6R0, which builds upon appV5R0 but uses a dynamic schema, data is bucketed by year and month. The field names in the items document represent only the day of the date, as the year and month are already stored in the _id field.
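The operations in the next sections rely on two small helpers, getDD and buildId, whose exact implementations are described in the appV5R0 introduction. As a rough sketch of what they might look like (the hex encoding of the key is an assumption for illustration, not the article's exact code):

```typescript
// Hedged sketch of the bucketing helpers used by the Bulk Upsert below.
// Assumes `key` is a hex string and _id is the binary form of key + YYYY + MM.

// Extract the two-digit day from the event date, e.g., 2022-01-05 -> "05".
function getDD(date: Date): string {
  return String(date.getUTCDate()).padStart(2, "0");
}

// Build the binary _id from key + year + month, e.g., ("...0001", 2022-01-05)
// -> Buffer of "...0001202201".
function buildId(key: string, date: Date): Buffer {
  const YYYY = String(date.getUTCFullYear());
  const MM = String(date.getUTCMonth() + 1).padStart(2, "0");
  return Buffer.from(`${key}${YYYY}${MM}`, "hex");
}
```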
A detailed explanation of the bucketing logic and functions used to implement the current application can be found in the appV5R0 introduction. The following document stores data for January 2022 (2022-01-XX), applying the newly presented idea:

```javascript
const document = {
  _id: Buffer.from("...01202201"),
  items: {
    "05": { a: 10, n: 3 },
    "16": { p: 1, r: 1 },
    "27": { a: 5, r: 1 },
    "29": { p: 1 },
  },
};
```

Schema

The application implementation presented above would have the following TypeScript document schema, named SchemaV6R0:

```typescript
export type SchemaV6R0 = {
  _id: Buffer;
  items: Record<
    string,
    {
      a?: number;
      n?: number;
      p?: number;
      r?: number;
    }
  >;
};
```

Bulk upsert

Based on the specification presented, we have the following updateOne operation for each event generated by this application version:

```javascript
const DD = getDD(event.date); // Extract the day (DD) from `event.date`

const operation = {
  updateOne: {
    filter: { _id: buildId(event.key, event.date) }, // key + year + month
    update: {
      $inc: {
        [`items.${DD}.a`]: event.approved,
        [`items.${DD}.n`]: event.noFunds,
        [`items.${DD}.p`]: event.pending,
        [`items.${DD}.r`]: event.rejected,
      },
    },
    upsert: true,
  },
};
```

- filter: Targets the document where the _id field matches the concatenated value of key, year, and month. The buildId function converts the key+year+month into a binary format.
- update: Uses the $inc operator to increment the fields corresponding to the same DD as the event by the status values provided. If a field does not exist in the items document and the event provides a value for it, $inc treats the non-existent field as having a value of 0 and performs the operation. If a field exists in the items document but the event does not provide a value for it (i.e., undefined), $inc treats it as 0 and performs the operation.
- upsert: Ensures a new document is created if no matching document exists.

Get reports

To fulfill the Get Reports operation, five aggregation pipelines are required, one for each date interval. Each pipeline follows the same structure, differing only in the filtering criteria in the $match stage:

```javascript
const pipeline = [
  { $match: docsFromKeyBetweenDate },
  { $addFields: buildTotalsField },
  { $group: groupSumTotals },
  { $project: { _id: 0 } },
];
```

The complete code for this aggregation pipeline is quite involved, so only its logic is presented here, stage by stage.

1: { $match: docsFromKeyBetweenDate }

Range-filters documents by _id to retrieve only buckets within the report date range. It has the same logic as appV5R0.

2: { $addFields: buildTotalsField }

The logic is similar to the one used in the Get Reports of appV5R3. The $objectToArray operator is used to convert the items document into an array, enabling a $reduce operation. Filtering the items fields within the report's range involves extracting the year and month from the _id field and the day from the field names in the items document.
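One hedged way of expressing this stage with aggregation operators is sketched below. It assumes that earlier stages have already extracted numeric year and month fields from the binary _id (that extraction is omitted here) and that the application injects the report boundaries when building the pipeline; all names are illustrative, not the article's exact code:

```typescript
declare const reportStartDate: Date; // report range start (assumed given)
declare const reportEndDate: Date; // report range end (assumed given)

// Sketch of the $addFields stage: fold the dynamic `items` document into
// per-status totals, keeping only days inside the report date range.
const buildTotalsField = {
  totals: {
    $reduce: {
      input: { $objectToArray: "$items" }, // [{ k: "05", v: { a: 10, n: 3 } }, ...]
      initialValue: { a: 0, n: 0, p: 0, r: 0 },
      in: {
        $let: {
          vars: {
            statusDate: {
              $dateFromParts: {
                year: "$year", // assumed extracted from _id by an earlier stage
                month: "$month", // assumed extracted from _id by an earlier stage
                day: { $toInt: "$$this.k" }, // the day comes from the field name
              },
            },
          },
          in: {
            $cond: [
              {
                $and: [
                  { $gte: ["$$statusDate", reportStartDate] },
                  { $lt: ["$$statusDate", reportEndDate] },
                ],
              },
              {
                a: { $add: ["$$value.a", { $ifNull: ["$$this.v.a", 0] }] },
                n: { $add: ["$$value.n", { $ifNull: ["$$this.v.n", 0] }] },
                p: { $add: ["$$value.p", { $ifNull: ["$$this.v.p", 0] }] },
                r: { $add: ["$$value.r", { $ifNull: ["$$this.v.r", 0] }] },
              },
              "$$value",
            ],
          },
        },
      },
    },
  },
};
```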
The following JavaScript code is logically equivalent to the real aggregation pipeline code:

```javascript
// Equivalent JavaScript logic:
const MM = _id.slice(-2).toString(); // Get month from _id
const YYYY = _id.slice(-6, -2).toString(); // Get year from _id
const items_array = Object.entries(items); // Convert the object to an array of [key, value]

const totals = items_array.reduce(
  (accumulator, [DD, status]) => {
    let statusDate = new Date(`${YYYY}-${MM}-${DD}`);

    if (statusDate >= reportStartDate && statusDate < reportEndDate) {
      accumulator.a += status.a || 0;
      accumulator.n += status.n || 0;
      accumulator.p += status.p || 0;
      accumulator.r += status.r || 0;
    }

    return accumulator;
  },
  { a: 0, n: 0, p: 0, r: 0 }
);
```

3: { $group: groupSumTotals }

Groups the totals of each document in the pipeline into final status totals using $sum operations.

4: { $project: { _id: 0 } }

Formats the resulting document to have the reports format.

Indexes

No additional indexes are required, maintaining the single _id index approach established in the appV4 implementation.

Initial scenario statistics

Collection statistics

To evaluate the performance of appV6R0, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:

| Collection | Documents | Data Size | Document Size | Storage Size | Indexes | Index Size |
| --- | --- | --- | --- | --- | --- | --- |
| appV5R0 | 95,350,431 | 19.19GB | 217B | 5.06GB | 1 | 2.95GB |
| appV5R3 | 33,429,492 | 11.96GB | 385B | 3.24GB | 1 | 1.11GB |
| appV6R0 | 95,350,319 | 11.1GB | 125B | 3.33GB | 1 | 3.13GB |

Event statistics

To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total data size and index size by the 500 million events.

| Collection | Data Size/Events | Index Size/Events | Total Size/Events |
| --- | --- | --- | --- |
| appV5R0 | 41.2B | 6.3B | 47.5B |
| appV5R3 | 25.7B | 2.4B | 28.1B |
| appV6R0 | 23.8B | 6.7B | 30.5B |

It is challenging to make a direct comparison between appV6R0 and appV5R0 from a storage perspective. The appV5R0 implementation is the simplest bucketing possible, where event documents were merely appended to the items array without grouping by day, as is done in appV6R0. However, we can attempt a comparison between appV6R0 and appV5R3, the best solution so far. In appV6R0, data is bucketed by month, whereas in appV5R3, it is bucketed by quarter. Assuming document size scales linearly with the bucketing criteria (though this is not entirely accurate), an equivalent quarter-bucketed appV6R0 document would be approximately 3 * 125 = 375 bytes, which is about 2.6% smaller than appV5R3's 385 bytes.

Another indicator of improvement is the Data Size/Events metric in the Event Statistics table. For appV6R0, each event uses an average of 23.8 bytes, compared to 25.7 bytes for appV5R3, representing a 7.4% reduction in size.

Load test results

Executing the load test for appV6R0 and plotting it alongside the results for appV5R0 and the desired rates, we have the following results for Get Reports and Bulk Upsert.

Get Reports rates

The two versions exhibit very similar rate performance, with appV6R0 showing slight superiority in the second and third quarters, while appV5R0 is superior in the first and fourth quarters.

Figure 1. Graph showing the rates of appV5R0 and appV6R0 when executing the load test for Get Reports functionality. Both have similar performance, but without reaching the desired rates.
Get Reports latency

The two versions exhibit very similar latency performance, with appV6R0 showing slight advantages in the second and third quarters, while appV5R0 is superior in the first and fourth quarters.

Figure 2. Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Get Reports functionality. appV5R0 has lower latency than appV6R0.

Bulk Upsert rates

Both versions have similar rate values, but it can be seen that appV6R0 has a small edge over appV5R0.

Figure 3. Graph showing the rates of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has better rates than appV5R0, but without reaching the desired rates.

Bulk Upsert latency

Although both versions have similar latency values for the first quarter of the test, for the final three quarters, appV6R0 has a clear advantage over appV5R0.

Figure 4. Graph showing the latency of appV5R0 and appV6R0 when executing the load test for Bulk Upsert functionality. appV6R0 has lower latency than appV5R0.

Performance summary

Despite the significant reduction in document and storage size achieved by appV6R0, the performance improvement was not as substantial as expected. This suggests that the bottleneck in the application when bucketing data by month may not be related to disk throughput. Examining the collection stats table reveals that the index size for both versions is close to 3GB. This is near the 4GB of available memory on the machine running the database and exceeds the 1.5GB allocated by WiredTiger for cache. Therefore, it is likely that the limiting factor in this case is memory/cache rather than document size, which explains the lack of a significant performance improvement.

Issues and improvements

To address the limitations observed in appV6R0, we propose adopting the same line of improvements applied from appV5R0 to appV5R1. Specifically, we will bucket the events by quarter in appV6R1. This approach not only follows the established pattern of enhancements but also aligns with the need to optimize performance further. As highlighted in the Load Test Results, the current bottleneck lies in the size of the index relative to the available cache/memory. By increasing the bucketing interval from month to quarter, we can reduce the number of documents by approximately a factor of three. This reduction will, in turn, decrease the number of index entries by the same factor, leading to a smaller index size.

Application version 6 revision 1 (appV6R1): A dynamic quarter bucket document

As discussed in the previous Issues and Improvements section, the primary bottleneck in appV6R0 was the index size nearing the memory capacity of the machine running MongoDB. To mitigate this issue, we propose increasing the bucketing interval from a month to a quarter for appV6R1, following the approach used in appV5R1. This adjustment aims to reduce the number of documents and index entries by approximately a factor of three, thereby decreasing the overall index size. By adopting a quarter-based bucketing strategy, we align with the established pattern of enhancements applied in the appV5RX revisions while addressing the specific memory/cache constraints identified in appV6R0.

The implementation of appV6R1 retains most of the code from appV6R0, with the following key differences (a sketch of the adapted helpers follows this list):

- The _id field will now be composed of key+year+quarter.
- The field names in the items document will encode both month and day, as this information is necessary for filtering date ranges in the Get Reports operation.
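As a hedged sketch (again assuming a hex-encoded key; implementations are illustrative), the two helpers from appV6R0 might be adapted as follows:

```typescript
// Extract the month and day from the event date, e.g., 2022-06-05 -> "0605".
function getMMDD(date: Date): string {
  const MM = String(date.getUTCMonth() + 1).padStart(2, "0");
  const DD = String(date.getUTCDate()).padStart(2, "0");
  return `${MM}${DD}`;
}

// Build the binary _id from key + year + quarter, e.g., ("...0001", 2022-06-05)
// -> Buffer of "...0001202202", since June falls in Q2.
function buildId(key: string, date: Date): Buffer {
  const YYYY = String(date.getUTCFullYear());
  const QQ = String(Math.floor(date.getUTCMonth() / 3) + 1).padStart(2, "0");
  return Buffer.from(`${key}${YYYY}${QQ}`, "hex");
}
```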
The following example demonstrates how data for June 2022 (2022-06-XX), within the second quarter (Q2), is stored using the new schema:

```javascript
const document = {
  _id: Buffer.from("...01202202"),
  items: {
    "0605": { a: 10, n: 3 },
    "0616": { p: 1, r: 1 },
    "0627": { a: 5, r: 1 },
    "0629": { p: 1 },
  },
};
```

Schema

The TypeScript document schema is unchanged from appV6R0; only the semantics of the field names in the items document change:

```typescript
export type SchemaV6R0 = {
  _id: Buffer;
  items: Record<
    string,
    {
      a?: number;
      n?: number;
      p?: number;
      r?: number;
    }
  >;
};
```

Bulk upsert

Based on the specification presented, we have the following updateOne operation for each event generated by this application version:

```javascript
const MMDD = getMMDD(event.date); // Extract the month (MM) and day (DD) from `event.date`

const operation = {
  updateOne: {
    filter: { _id: buildId(event.key, event.date) }, // key + year + quarter
    update: {
      $inc: {
        [`items.${MMDD}.a`]: event.approved,
        [`items.${MMDD}.n`]: event.noFunds,
        [`items.${MMDD}.p`]: event.pending,
        [`items.${MMDD}.r`]: event.rejected,
      },
    },
    upsert: true,
  },
};
```

This updateOne operation has a similar logic to the one in appV6R0, with the only differences being the filter and update criteria.

- filter: Targets the document where the _id field matches the concatenated value of key, year, and quarter. The buildId function converts the key+year+quarter into a binary format.
- update: Uses the $inc operator to increment the fields corresponding to the same MMDD as the event by the status values provided.

Get reports

To fulfill the Get Reports operation, five aggregation pipelines are required, one for each date interval. Each pipeline follows the same structure, differing only in the filtering criteria in the $match stage:

```javascript
const pipeline = [
  { $match: docsFromKeyBetweenDate },
  { $addFields: buildTotalsField },
  { $group: groupSumTotals },
  { $project: { _id: 0 } },
];
```

This aggregation operation has a similar logic to the one in appV6R0, with the only difference being the implementation of the $addFields stage.

{ $addFields: buildTotalsField }

A similar implementation to the one in appV6R0. The difference lies in extracting the value of the year (YYYY) from the _id field and the month and day (MMDD) from the field names. The following JavaScript code is logically equivalent to the real aggregation pipeline code:

```javascript
const YYYY = _id.slice(-6, -2).toString(); // Get year from _id
const items_array = Object.entries(items); // Convert the object to an array of [key, value]

const totals = items_array.reduce(
  (accumulator, [MMDD, status]) => {
    let [MM, DD] = [MMDD.slice(0, 2), MMDD.slice(2, 4)];
    let statusDate = new Date(`${YYYY}-${MM}-${DD}`);

    if (statusDate >= reportStartDate && statusDate < reportEndDate) {
      accumulator.a += status.a || 0;
      accumulator.n += status.n || 0;
      accumulator.p += status.p || 0;
      accumulator.r += status.r || 0;
    }

    return accumulator;
  },
  { a: 0, n: 0, p: 0, r: 0 }
);
```

Indexes

No additional indexes are required, maintaining the single _id index approach established in the appV4 implementation.
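Stepping back to the Get Reports flow as a whole, the reports document is assembled from the five date intervals. A hedged sketch of how the application might run the five pipelines concurrently and assemble the result (the driver usage and the buildPipeline helper are illustrative assumptions, not the article's code):

```typescript
import { Collection, Document } from "mongodb";

// Assumed helper: builds the pipeline above with a $match covering the
// given interval, e.g., "oneYear" -> the one-year range ending at reportDate.
declare function buildPipeline(key: string, reportDate: Date, interval: string): Document[];

const intervals = ["oneYear", "threeYears", "fiveYears", "sevenYears", "tenYears"];

async function getReports(collection: Collection, key: string, reportDate: Date): Promise<Document> {
  // Run one aggregation per date interval concurrently; each pipeline is
  // assumed to return a single totals document.
  const results = await Promise.all(
    intervals.map((interval) =>
      collection.aggregate(buildPipeline(key, reportDate, interval)).toArray()
    )
  );
  // Assemble { oneYear: {...}, threeYears: {...}, ... }.
  return Object.fromEntries(intervals.map((interval, i) => [interval, results[i][0]]));
}
```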
Initial scenario statistics

Collection statistics

To evaluate the performance of appV6R1, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:

| Collection | Documents | Data Size | Document Size | Storage Size | Indexes | Index Size |
| --- | --- | --- | --- | --- | --- | --- |
| appV5R3 | 33,429,492 | 11.96GB | 385B | 3.24GB | 1 | 1.11GB |
| appV6R0 | 95,350,319 | 11.1GB | 125B | 3.33GB | 1 | 3.13GB |
| appV6R1 | 33,429,366 | 8.19GB | 264B | 2.34GB | 1 | 1.22GB |

Event statistics

To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total data size and index size by the 500 million events.

| Collection | Data Size/Events | Index Size/Events | Total Size/Events |
| --- | --- | --- | --- |
| appV5R3 | 25.7B | 2.4B | 28.1B |
| appV6R0 | 23.8B | 6.7B | 30.5B |
| appV6R1 | 17.6B | 2.6B | 20.2B |

In the previous Initial Scenario Statistics analysis, we assumed that document size would scale linearly with the bucketing range. However, this assumption proved inaccurate: the average document size in appV6R1 is approximately twice as large as in appV6R0, even though it stores three times more data. That is already a win for this new implementation.

Since appV6R1 buckets data by quarter at the document level and by day within the items sub-document, a fair comparison is with appV5R3, the best-performing version so far. From the tables above, we observe a significant improvement in Document Size, and consequently Data Size, when transitioning from appV5R3 to appV6R1. Specifically, there was a 31.4% reduction in Document Size. From an index size perspective, there was practically no change, as both versions bucket events by quarter.

Load test results

Executing the load test for appV6R1 and plotting it alongside the results for appV5R3 and the desired rates, we have the following results for Get Reports and Bulk Upsert.

Get Reports rates

For the first three quarters of the test, both versions have similar rate values, but, for the final quarter, appV6R1 has a notable edge over appV5R3.

Figure 5. Graph showing the rates of appV5R3 and appV6R1 when executing the load test for Get Reports functionality. Both have similar rates, with appV6R1 ahead in the final quarter, but without reaching the desired rates.

Get Reports latency

The two versions exhibit very similar latency performance, with appV6R1 showing slight advantages in the second and third quarters, while appV5R3 is superior in the first and fourth quarters.

Figure 6. Graph showing the latency of appV5R3 and appV6R1 when executing the load test for Get Reports functionality. Both versions have similar latencies.

Bulk Upsert rates

Both versions have similar rate values, but it can be seen that appV6R1 has a small edge over appV5R3.

Figure 7. Graph showing the rates of appV5R3 and appV6R1 when executing the load test for Bulk Upsert functionality. appV6R1 has better rates than appV5R3, but without reaching the desired rates.

Bulk Upsert latency

Although both versions have similar latency values for the first quarter of the test, for the final three quarters, appV6R1 has a clear advantage over appV5R3.

Figure 8. Graph showing the latency of appV5R3 and appV6R1 when executing the load test for Bulk Upsert functionality. appV6R1 has lower latency than appV5R3.

Performance summary

With documents 31.4% smaller than appV5R3's and an index roughly a third of the size of appV6R0's, appV6R1 outperformed appV5R3, the best version so far, most notably in Bulk Upsert latency. Even so, the desired rates were not fully reached, and the main limitation of our MongoDB server remains its disk throughput.

Issues and improvements

One improvement from the appV5RX line that we have not yet tried on top of the dynamic schema is the Computed Pattern applied at the document level, as done from appV5R3 to appV5R4: adding a totals field to each document that holds the pre-computed status totals of its whole bucket. Two "maybes" come with this change: maybe the resulting increase in document size won't hurt an application bottlenecked by disk throughput, and maybe the processing saved when generating reports will outweigh the extra increments in the Bulk Upsert. appV6R2 puts these two bets to the test.
Application version 6 revision 2 (appV6R2): A dynamic quarter bucket document with computed totals

As proposed in the previous Issues and Improvements section, appV6R2 is built on appV6R1 and applies the Computed Pattern at the document level. Alongside the dynamic items document, each document also maintains a totals field with the pre-computed status totals of its whole bucket, updated during the Bulk Upsert. The schema of the items document, the indexes, and the overall structure of the Get Reports pipeline otherwise match appV6R1, with the aggregation able to use the pre-computed totals whenever a bucket falls entirely within the report date range.
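As a hedged illustration (the exact field layout is an assumption, following the appV5R4 analog), a Q2 2022 document in appV6R2 might look like this:

```typescript
// Illustrative appV6R2 document: same dynamic items as appV6R1, plus a
// pre-computed totals field accumulating the whole bucket's status totals.
const document = {
  _id: Buffer.from("...01202202"), // key + year + quarter
  totals: { a: 15, n: 3, p: 2, r: 2 }, // pre-computed bucket totals
  items: {
    "0605": { a: 10, n: 3 },
    "0616": { p: 1, r: 1 },
    "0627": { a: 5, r: 1 },
    "0629": { p: 1 },
  },
};
```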
Initial scenario statistics

Collection statistics

To evaluate the performance of appV6R2, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:

| Collection | Documents | Data Size | Document Size | Storage Size | Indexes | Index Size |
| --- | --- | --- | --- | --- | --- | --- |
| appV5R3 | 33,429,492 | 11.96GB | 385B | 3.24GB | 1 | 1.11GB |
| appV6R1 | 33,429,366 | 8.19GB | 264B | 2.34GB | 1 | 1.22GB |
| appV6R2 | 33,429,207 | 9.11GB | 293B | 2.8GB | 1 | 1.26GB |

Event statistics

To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total data size and index size by the 500 million events.

| Collection | Data Size/Events | Index Size/Events | Total Size/Events |
| --- | --- | --- | --- |
| appV5R3 | 25.7B | 2.4B | 28.1B |
| appV6R1 | 17.6B | 2.6B | 20.2B |
| appV6R2 | 19.6B | 2.7B | 22.3B |

As expected, we had an 11.2% increase in the Document Size by adding a totals field to each document of appV6R2. When comparing to appV5R3, we still have a reduction of 23.9% in the Document Size. Let's review the Load Test Results to see if the trade-off between storage and computation cost is worthwhile.

Load test results

Executing the load test for appV6R2 and plotting it alongside the results for appV6R1 and the desired rates, we have the following results for Get Reports and Bulk Upsert.

Get Reports rates

We can see that appV6R2 has better rates than appV6R1 throughout the test, but it's still not reaching the top rate of 250 reports per second.

Figure 9. Graph showing the rates of appV6R1 and appV6R2 when executing the load test for Get Reports functionality.
appV6R2 has better rates than appV6R1, but without reaching the desired rates.

Get Reports latency

As shown in the rates graph, appV6R2 consistently provides lower latency than appV6R1 throughout the test.

Figure 10. Graph showing the latency of appV6R1 and appV6R2 when executing the load test for Get Reports functionality. appV6R2 has lower latency than appV6R1.

Bulk Upsert rates

Both versions exhibit very similar rate values throughout the test, with appV6R2 performing slightly better than appV6R1 in the final 20 minutes, yet still failing to reach the desired rate.

Figure 11. Graph showing the rates of appV6R1 and appV6R2 when executing the load test for Bulk Upsert functionality. appV6R2 has better rates than appV6R1, almost reaching the desired rates.

Bulk Upsert latency

Although appV6R2 had better rate values than appV6R1, their latency performance is not conclusive, with appV6R2 being superior in the first and final quarters and appV6R1 in the second and third quarters.

Figure 12. Graph showing the latency of appV6R1 and appV6R2 when executing the load test for Bulk Upsert functionality. Both versions have similar latencies.

Performance summary

The two "maybes" from the previous Issues and Improvements lived up to their promises, and appV6R2 delivered the best performance so far, surpassing appV6R1. This is the redemption of the Computed Pattern applied at the document level. This revision is one of my favorites because it shows that the same optimization on very similar applications can lead to different results. In our case, the difference was caused by the application being heavily bottlenecked by disk throughput.

Issues and improvements

Let's tackle the last improvement at the application level. Those paying close attention to the application versions may have already questioned it: in every Get Reports section, we have "To fulfill the Get Reports operation, five aggregation pipelines are required, one for each date interval." Do we really need to run five aggregation pipelines to generate the reports document? Isn't there a way to calculate everything in just one operation?

The answer is yes, there is. The reports document is composed of the fields oneYear, threeYears, fiveYears, sevenYears, and tenYears, where, until now, each one has been generated by its own aggregation pipeline. Generating the reports this way wastes processing power because part of the calculation is performed multiple times. For example, to calculate the status totals for tenYears, we also have to calculate the status totals for the other fields, as, from a date range perspective, they are all contained in the tenYears date range. So, for our next application revision, we'll condense the five Get Reports aggregation pipelines into one, avoiding wasted processing power on repeated calculations.

Application version 6 revision 3 (appV6R3): Getting everything at once

As discussed in the previous Issues and Improvements section, in this revision, we'll improve the performance of our application by changing the Get Reports functionality to generate the reports document using only one aggregation pipeline instead of five. The rationale behind this improvement is that when we generate the tenYears totals, we have also calculated the other totals: oneYear, threeYears, fiveYears, and sevenYears.
As an example, when we request Get Reports for the key ...0001 with the date 2022-01-01, the totals will be calculated with the following date ranges:

- oneYear: from 2021-01-01 to 2022-01-01
- threeYears: from 2020-01-01 to 2022-01-01
- fiveYears: from 2018-01-01 to 2022-01-01
- sevenYears: from 2016-01-01 to 2022-01-01
- tenYears: from 2013-01-01 to 2022-01-01

As we can see from the list above, the date range for tenYears encompasses all the other date ranges.

Although we successfully implemented the Computed Pattern in the previous revision, appV6R2, achieving better results than appV6R1, we will not use it as the base for this revision, for two reasons:

- Based on the results of our previous implementation of the Computed Pattern at the document level, from appV5R3 to appV5R4, I didn't expect it to produce better results.
- Implementing Get Reports to retrieve the reports document through a single aggregation pipeline while utilizing the pre-computed field totals generated by the Computed Pattern would require significant effort, and by the time of the latest versions of this series, I just wanted to finish it.

So, this revision will be built on top of appV6R1.

Schema

The TypeScript document schema is once again unchanged in shape; the field names in the items document now encode the full date (YYYYMMDD):

```typescript
export type SchemaV6R0 = {
  _id: Buffer;
  items: Record<
    string,
    {
      a?: number;
      n?: number;
      p?: number;
      r?: number;
    }
  >;
};
```

Bulk upsert

Based on the specifications, the following bulk updateOne operation is used for each event generated by the application:

```javascript
const YYYYMMDD = getYYYYMMDD(event.date); // Extract the year (YYYY), month (MM), and day (DD) from `event.date`

const operation = {
  updateOne: {
    filter: { _id: buildId(event.key, event.date) }, // key + year + quarter
    update: {
      $inc: {
        [`items.${YYYYMMDD}.a`]: event.approved,
        [`items.${YYYYMMDD}.n`]: event.noFunds,
        [`items.${YYYYMMDD}.p`]: event.pending,
        [`items.${YYYYMMDD}.r`]: event.rejected,
      },
    },
    upsert: true,
  },
};
```

This updateOne has almost exactly the same logic as the one for appV6R1. The difference is that the names of the fields in the items document are now created from the year, month, and day (YYYYMMDD) instead of just the month and day (MMDD). This change was made to reduce the complexity of the Get Reports aggregation pipeline.

Get reports

To fulfill the Get Reports operation, only one aggregation pipeline is now required:

```javascript
const pipeline = [
  { $match: docsFromKeyBetweenDate },
  { $addFields: buildTotalsField },
  { $group: groupCountTotals },
  { $project: format },
];
```

This aggregation operation has a similar logic to the one in appV6R1, with the only difference being the implementation of the $addFields stage.

{ $addFields: buildTotalsField }

It follows a similar logic to the previous revision, where we first convert the items document into an array using $objectToArray, and then use a reduce operation to iterate over the array, accumulating the status totals. The differences lie in the initial value and in the logic of the reduce function.

The initial value in this case is a document with one field for each of the report date ranges. Each of these fields is itself a document, whose fields are the possible statuses set to zero. The logic checks the date range of each item and increments the totals accordingly:

- If the item isInOneYearDateRange(...), it is also in all the other date ranges: three, five, seven, and 10 years.
- If the item isInThreeYearsDateRange(...), it is also in all the other wider date ranges: five, seven, and 10 years.

The following JavaScript code is logically equivalent to the real aggregation pipeline code. Senior developers could argue that this implementation could be less verbose or more optimized; however, due to how MongoDB aggregation pipeline operators are specified, this is how it was implemented.

```javascript
const itemsArray = Object.entries(items); // Convert the object to an array of [key, value]

const totals = itemsArray.reduce(
  (totals, [YYYYMMDD, status]) => {
    const YYYY = YYYYMMDD.slice(0, 4); // Get year
    const MM = YYYYMMDD.slice(4, 6); // Get month
    const DD = YYYYMMDD.slice(6, 8); // Get day
    let statusDate = new Date(`${YYYY}-${MM}-${DD}`);

    if (isInOneYearDateRange(statusDate)) {
      totals.oneYear = incrementTotals(totals.oneYear, status);
      totals.threeYears = incrementTotals(totals.threeYears, status);
      totals.fiveYears = incrementTotals(totals.fiveYears, status);
      totals.sevenYears = incrementTotals(totals.sevenYears, status);
      totals.tenYears = incrementTotals(totals.tenYears, status);
    } else if (isInThreeYearsDateRange(statusDate)) {
      totals.threeYears = incrementTotals(totals.threeYears, status);
      totals.fiveYears = incrementTotals(totals.fiveYears, status);
      totals.sevenYears = incrementTotals(totals.sevenYears, status);
      totals.tenYears = incrementTotals(totals.tenYears, status);
    } else if (isInFiveYearsDateRange(statusDate)) {
      totals.fiveYears = incrementTotals(totals.fiveYears, status);
      totals.sevenYears = incrementTotals(totals.sevenYears, status);
      totals.tenYears = incrementTotals(totals.tenYears, status);
    } else if (isInSevenYearsDateRange(statusDate)) {
      totals.sevenYears = incrementTotals(totals.sevenYears, status);
      totals.tenYears = incrementTotals(totals.tenYears, status);
    } else if (isInTenYearsDateRange(statusDate)) {
      totals.tenYears = incrementTotals(totals.tenYears, status);
    }

    return totals;
  },
  {
    oneYear: { a: 0, n: 0, p: 0, r: 0 },
    threeYears: { a: 0, n: 0, p: 0, r: 0 },
    fiveYears: { a: 0, n: 0, p: 0, r: 0 },
    sevenYears: { a: 0, n: 0, p: 0, r: 0 },
    tenYears: { a: 0, n: 0, p: 0, r: 0 },
  }
);
```
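The code above assumes a few helpers that are only named in the article. A minimal sketch of what they might look like (the boundary conventions are assumptions):

```typescript
// Illustrative helpers for the logic above. Each range check tests only the
// lower bound; the upper bound (the report date) is assumed to be already
// enforced by the $match stage of the pipeline.
type StatusTotals = { a: number; n: number; p: number; r: number };

declare const reportDate: Date; // the date the report was requested for

const yearsBefore = (years: number, date: Date): Date =>
  new Date(date.getFullYear() - years, date.getMonth(), date.getDate());

const isInOneYearDateRange = (d: Date) => d >= yearsBefore(1, reportDate);
const isInThreeYearsDateRange = (d: Date) => d >= yearsBefore(3, reportDate);
const isInFiveYearsDateRange = (d: Date) => d >= yearsBefore(5, reportDate);
const isInSevenYearsDateRange = (d: Date) => d >= yearsBefore(7, reportDate);
const isInTenYearsDateRange = (d: Date) => d >= yearsBefore(10, reportDate);

// Add one item's status counts to a totals accumulator.
function incrementTotals(
  totals: StatusTotals,
  status: Partial<StatusTotals>
): StatusTotals {
  return {
    a: totals.a + (status.a ?? 0),
    n: totals.n + (status.n ?? 0),
    p: totals.p + (status.p ?? 0),
    r: totals.r + (status.r ?? 0),
  };
}
```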
Indexes

No additional indexes are required, maintaining the single _id index approach established in the appV4 implementation.

Initial scenario statistics

Collection statistics

To evaluate the performance of appV6R3, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:

| Collection | Documents | Data Size | Document Size | Storage Size | Indexes | Index Size |
| --- | --- | --- | --- | --- | --- | --- |
| appV6R1 | 33,429,366 | 8.19GB | 264B | 2.34GB | 1 | 1.22GB |
| appV6R2 | 33,429,207 | 9.11GB | 293B | 2.8GB | 1 | 1.26GB |
| appV6R3 | 33,429,694 | 9.53GB | 307B | 2.56GB | 1 | 1.19GB |

Event statistics

To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total data size and index size by the 500 million events.

| Collection | Data Size/Events | Index Size/Events | Total Size/Events |
| --- | --- | --- | --- |
| appV6R1 | 17.6B | 2.6B | 20.2B |
| appV6R2 | 19.6B | 2.7B | 22.3B |
| appV6R3 | 20.5B | 2.6B | 23.1B |

Because we are adding the year (YYYY) information to the name of each items document field, we got a 16.3% increase in Document Size compared to appV6R1 and a 4.8% increase compared to appV6R2. This increase may be compensated by the gains in the Get Reports function, as we saw when going from appV6R1 to appV6R2.

Load test results

Executing the load test for appV6R3 and plotting it alongside the results for appV6R2, we have the following results for Get Reports and Bulk Upsert.

Get Reports rate

We achieved a significant improvement by transitioning from appV6R2 to appV6R3. For the first time, the application successfully reached the desired rates in a single phase.

Figure 13. Graph showing the rates of appV6R2 and appV6R3 when executing the load test for Get Reports functionality. appV6R3 has better rates than appV6R2, but without reaching the desired rates in every phase.

Get Reports latency

The latency saw significant improvements, with the peak value reduced by 71% in the first phase, 67% in the second phase, 47% in the third phase, and 30% in the fourth phase.

Figure 14. Graph showing the latency of appV6R2 and appV6R3 when executing the load test for Get Reports functionality. appV6R3 has lower latency than appV6R2.

Bulk Upsert rate

As happened in the previous version, the application was able to reach all the desired rates.

Figure 15. Graph showing the rates of appV6R2 and appV6R3 when executing the load test for Bulk Upsert functionality. appV6R3 has better rates than appV6R2, and reaches the desired rates.

Bulk Upsert latency

Here, we have one of the most significant gains in this series: the latency decreased from seconds to milliseconds. We went from a peak of 1.8 seconds to 250ms in the first phase, from 2.3 seconds to 400ms in the second phase, from 2 seconds to 600ms in the third phase, and from 2.2 seconds to 800ms in the fourth phase.

Figure 16. Graph showing the latency of appV6R2 and appV6R3 when executing the load test for Bulk Upsert functionality. appV6R3 has lower latency than appV6R2.

Issues and improvements

The main bottleneck in our MongoDB server is still the disk throughput. As mentioned in the previous Issues and Improvements, that was the last improvement at the application level. How can we further optimize on our current hardware?

If we take a closer look at the MongoDB documentation, we'll find out that, by default, it uses block compression with the snappy compression library for all collections. Before the data is written to disk, it is compressed using the snappy library to reduce its size and speed up the writing process. Would it be possible to use a different and more effective compression library to reduce the size of the data even further and, as a consequence, reduce the load on the server's disk? Yes, and in the following application revision, we will use the zstd compression library instead of the default snappy compression library.

Application version 6 revision 4 (appV6R4)

As discussed in the previous Issues and Improvements section, the performance gains of this version will be provided by changing the algorithm of the collection block compressor.
By default, MongoDB uses snappy, which we will change to zstd to achieve better compression at the expense of more CPU usage. All the schemas, functions, and code from this version are exactly the same as in appV6R3. To create a collection that uses the zstd compression algorithm, the following command can be used:

```javascript
// Create the collection with zstd block compression in WiredTiger.
db.createCollection("<collection-name>", {
  storageEngine: { wiredTiger: { configString: "block_compressor=zstd" } },
});
```

Schema

The TypeScript document schema is the same as in appV6R3:

```typescript
export type SchemaV6R0 = {
  _id: Buffer;
  items: Record<
    string,
    {
      a?: number;
      n?: number;
      p?: number;
      r?: number;
    }
  >;
};
```

Bulk upsert

Based on the specifications, the following bulk updateOne operation is used for each event generated by the application:

```javascript
const YYYYMMDD = getYYYYMMDD(event.date); // Extract the year (YYYY), month (MM), and day (DD) from `event.date`

const operation = {
  updateOne: {
    filter: { _id: buildId(event.key, event.date) }, // key + year + quarter
    update: {
      $inc: {
        [`items.${YYYYMMDD}.a`]: event.approved,
        [`items.${YYYYMMDD}.n`]: event.noFunds,
        [`items.${YYYYMMDD}.p`]: event.pending,
        [`items.${YYYYMMDD}.r`]: event.rejected,
      },
    },
    upsert: true,
  },
};
```

This updateOne has exactly the same logic as the one for appV6R3.

Get reports

Based on the information presented earlier, we have the following aggregation pipeline to generate the reports document:

```javascript
const pipeline = [
  { $match: docsFromKeyBetweenDate },
  { $addFields: buildTotalsField },
  { $group: groupCountTotals },
  { $project: format },
];
```

This pipeline has exactly the same logic as the one for appV6R3.

Indexes

No additional indexes are required, maintaining the single _id index approach established in the appV4 implementation.

Initial scenario statistics

Collection statistics

To evaluate the performance of appV6R4, we inserted 500 million event documents into the collection using the schema and Bulk Upsert function described earlier. For comparison, the tables below also include statistics from previous comparable application versions:

| Collection | Documents | Data Size | Document Size | Storage Size | Indexes | Index Size |
| --- | --- | --- | --- | --- | --- | --- |
| appV6R3 | 33,429,694 | 9.53GB | 307B | 2.56GB | 1 | 1.19GB |
| appV6R4 | 33,429,372 | 9.53GB | 307B | 1.47GB | 1 | 1.34GB |

Event statistics

To evaluate the storage efficiency per event, the Event Statistics are calculated by dividing the total storage size and index size by the 500 million events.

| Collection | Storage Size/Events | Index Size/Events | Total Storage Size/Events |
| --- | --- | --- | --- |
| appV6R3 | 5.5B | 2.6B | 8.1B |
| appV6R4 | 3.2B | 2.8B | 6.0B |

Since the application implementation of appV6R4 is the same as appV6R3, the values for Data Size and Document Size remain the same. The difference lies in Storage Size, which represents the Data Size after compression. Going from snappy to zstd decreased the Storage Size by a jaw-dropping 43%. Looking at the Event Statistics, there was a reduction of 26% in the storage required to register each event, going from 8.1 bytes to 6 bytes. These considerable reductions in size will probably translate to better performance in this version, as our main bottleneck is disk throughput.

Load test results

Executing the load test for appV6R4 and plotting it alongside the results for appV6R3, we have the following results for Get Reports and Bulk Upsert.
Get Reports rate

Although we didn't achieve all the desired rates, we saw a significant improvement from appV6R3 to appV6R4. This revision allowed us to reach the desired rates in the first, second, and third quarters.

Figure 17. Graph showing the rates of appV6R3 and appV6R4 when executing the load test for Get Reports functionality. appV6R4 has better rates than appV6R3, but without reaching the desired rates in the final quarter.

Get Reports latency

The latency also saw significant improvements, with the peak value reduced by 30% in the first phase, 57% in the second phase, 61% in the third phase, and 57% in the fourth phase.

Figure 18. Graph showing the latency of appV6R3 and appV6R4 when executing the load test for Get Reports functionality. appV6R4 has lower latency than appV6R3.

Bulk Upsert rate

As happened in the previous version, the application was able to reach all the desired rates.

Figure 19. Graph showing the rates of appV6R3 and appV6R4 when executing the load test for Bulk Upsert functionality. Both versions reach the desired rates.

Bulk Upsert latency

Here, we also achieved considerable improvements, with the peak value reduced by 48% in the first phase, 39% in the second phase, 43% in the third phase, and 47% in the fourth phase.

Figure 20. Graph showing the latency of appV6R3 and appV6R4 when executing the load test for Bulk Upsert functionality. appV6R4 has lower latency than appV6R3.

Issues and improvements

Although this is the final version of the series, there is still room for improvement. For those willing to try them by themselves, here are the ones I was able to think of:

- Use the Computed Pattern in appV6R4.
- Optimize the aggregation pipeline logic for Get Reports in appV6R4.
- Change the zstd compression level from its default value of 6 to a higher value.

Conclusion

This final part of "The Cost of Not Knowing MongoDB" series has explored the ultimate evolution of MongoDB application optimization, demonstrating how revolutionary design patterns and infrastructure-level improvements can transcend traditional performance boundaries. The journey through appV6R0 to appV6R4 represents the culmination of sophisticated MongoDB development practices, achieving performance levels that seemed impossible with the baseline appV1 implementation.

Series transformation summary

From foundation to revolution: The complete series showcases a remarkable transformation across three distinct optimization phases.

- Part 1 (appV1-appV4): Document-level optimizations achieving 51% storage reduction through schema refinement, data type optimization, and strategic indexing.
- Part 2 (appV5R0-appV5R4): Advanced pattern implementation with the Bucket and Computed Patterns, delivering 89% index size reduction and first-time achievement of target rates.
- Part 3 (appV6R0-appV6R4): Revolutionary Dynamic Schema Pattern with infrastructure optimization, culminating in sub-second latencies and comprehensive target rate achievement.

Performance evolution: The progression reveals dramatic improvements across all metrics.

- Get Reports latency: From 6.5 seconds (appV1) to 200-800ms (appV6R4), a 92% improvement.
- Bulk Upsert latency: From 62 seconds (appV1) to 250-800ms (appV6R4), a 99% improvement.
- Storage efficiency: From 128.1B per event (appV1) to 6.0B per event (appV6R4), a 95% reduction.
- Target rate achievement: From consistent failures to sustained success across all operational phases.
Architectural paradigm shifts

The Dynamic Schema Pattern revolution: appV6R0 through appV6R4 introduced the most sophisticated MongoDB design pattern explored in this series. The Dynamic Schema Pattern fundamentally redefined data organization by:

- Eliminating array overhead: Replacing MongoDB arrays with computed object structures to minimize storage and processing costs.
- Single-pipeline optimization: Consolidating five separate aggregation pipelines into one optimized operation, reducing computational overhead by 80%.
- Infrastructure-level optimization: Implementing zstd compression, achieving 43% additional storage reduction over default snappy compression.

Query optimization breakthroughs: The implementation of intelligent date range calculation within aggregation pipelines eliminated redundant operations while maintaining data accuracy. This approach demonstrates senior-level MongoDB development by leveraging advanced aggregation framework capabilities to achieve both performance and maintainability.

Critical technical insights

Performance bottleneck evolution: Throughout the series, we observed how the optimization focus shifted as bottlenecks were resolved.

- Initial phase: Index size and query inefficiency dominated performance.
- Intermediate phase: Document retrieval count became the limiting factor.
- Advanced phase: Aggregation pipeline complexity constrained throughput.
- Final phase: Disk I/O emerged as the ultimate hardware limitation.

Pattern application maturity: The series demonstrates the progression from junior to senior MongoDB development practices.

- Junior level: Schema design without understanding indexing implications (appV1).
- Intermediate level: Applying individual optimization techniques (appV2-appV4).
- Advanced level: Implementing established MongoDB patterns (appV5RX).
- Senior level: Creating custom patterns and infrastructure optimization (appV6RX).

Production implementation guidelines

When to apply each pattern: Based on the comprehensive analysis, the following guidelines emerge for production implementations.

- Document-level optimizations: Essential for all MongoDB applications, providing 40-60% improvement with minimal complexity.
- Bucket Pattern: Optimal for time-series data with 10:1 or greater read-to-write ratios.
- Computed Pattern: Most effective in read-heavy scenarios with predictable aggregation requirements.
- Dynamic Schema Pattern: Reserved for high-performance applications where development complexity trade-offs are justified.

Infrastructure considerations: The zstd compression implementation in appV6R4 demonstrates that infrastructure-level optimizations can provide substantial benefits (40%+ storage reduction) with minimal application changes. However, these optimizations require careful CPU utilization monitoring and may not be suitable for CPU-constrained environments.

The true cost of not knowing MongoDB

This series reveals that the "cost" extends far beyond mere performance degradation.

Quantifiable impacts:

- Resource utilization: Up to 20x more storage requirements for equivalent functionality.
- Infrastructure costs: Potentially 10x higher hardware requirements due to inefficient patterns.
- Developer productivity: Months of optimization work that could be avoided with proper initial design.
- Scalability limitations: Fundamental architectural constraints that become exponentially expensive to resolve.

Hidden complexities: More critically, the series demonstrates that MongoDB's apparent simplicity can mask sophisticated optimization requirements.
The transition from appV1 to appV6R4 required a deep understanding of:

- Aggregation framework internals and optimization strategies.
- Index behavior with different data types and query patterns.
- Storage engine compression algorithms and trade-offs.
- Memory management and cache utilization patterns.

Final recommendations

For development teams:

- Invest in MongoDB education: The performance differences documented in this series justify substantial training investments.
- Establish pattern libraries: Codify successful patterns like those demonstrated to prevent anti-pattern adoption.
- Implement performance testing: Regular load testing reveals optimization opportunities before they become production issues.
- Plan for iteration: Schema evolution is inevitable; design systems that accommodate architectural improvements.

For architectural decisions:

- Start with fundamentals: Proper indexing and schema design provide the foundation for all subsequent optimizations.
- Measure before optimizing: Each optimization phase in this series was guided by comprehensive performance measurement.
- Consider total cost of ownership: The development complexity of advanced patterns must be weighed against performance requirements.
- Plan infrastructure scaling: Understand that hardware limitations will eventually constrain software optimizations.

Closing reflection

The journey from appV1 to appV6R4 demonstrates that MongoDB mastery requires understanding not just the database itself, but the intricate relationships between schema design, query patterns, indexing strategies, aggregation frameworks, and infrastructure capabilities. The 99% performance improvements documented in this series are achievable, but they demand dedication to continuous learning and sophisticated engineering practices.

For organizations serious about MongoDB performance, this series provides both a roadmap for optimization and a compelling case for investing in advanced MongoDB expertise. The cost of not knowing MongoDB extends far beyond individual applications: it impacts entire technology strategies and competitive positioning in data-driven markets.

The patterns, techniques, and insights presented throughout this three-part series offer a comprehensive foundation for building high-performance MongoDB applications that can scale efficiently while maintaining operational excellence. Most importantly, they demonstrate that with proper knowledge and application, MongoDB can deliver extraordinary performance that justifies its position as a leading database technology for modern applications.

Learn more about MongoDB design patterns! Check out more posts from Artur Costa.

October 9, 2025

The 10 Skills I Was Missing as a MongoDB User

When I first started using MongoDB, I didn't have a plan beyond "install it and hope for the best." I had read about how flexible it was, and it felt like all the developers swore by it, so I figured I'd give it a shot. I spun it up, built my first application, and got a feature working. But I felt like something was missing. It felt clunky. My queries were longer than I expected, and performance wasn't great; I had the sense that I was fighting with the database instead of working with it. After a few projects like that, I began to wonder if maybe MongoDB wasn't for me.

Looking back now, I can say the problem wasn't MongoDB. It was somewhere between the keyboard and the chair: it was me. I was carrying over habits from years of working with relational databases, expecting the same rules to apply. If MongoDB's Skill Badges had existed when I started, I think my learning curve would have been a lot shorter. I had to learn many lessons the hard way, but these new badges cover the skills I had to piece together slowly. Instead of pretending I nailed it from day one, here's the honest version of how I learned MongoDB, what tripped me up along the way, and how these Skill Badges would have helped.

Learning to model the MongoDB way

The first thing I got wrong was data modeling. I built my schema like I was still working in SQL: every entity in its own collection, always referencing instead of embedding, and absolutely no data duplication. It felt safe because it was familiar. Then I hit my first complex query. It required data from various collections, and suddenly, I found myself writing a series of queries and stitching them together in my code. It worked, but it was a messy process.

When I discovered embedding, it felt like I had found a cheat code. I could put related data together in one single document, query it in one shot, and get better performance. That's when I made my second mistake: I started embedding everything. At first, it seemed fine. However, my documents grew huge, updates became slower, and I was duplicating data in ways that created consistency issues. That's when I learned about patterns like Extended References, and more generally, how to choose between embedding and referencing based on access patterns and update frequency. Later, I ran into more specialized needs, such as pre-computing data, embedding a subset of a large dataset into a parent, and tackling schema versioning. Back then, I learned those patterns by trial and error. Now, they're covered in badges like Relational to Document Model, Schema Design Patterns, and Advanced Schema Patterns.

Fixing what I thought was "just a slow query"

Even after I got better at modeling, performance issues kept popping up. One collection in particular started slowing down as it grew, and I thought, "I know what to do! I'll just add some indexes." I added them everywhere I thought they might help. Nothing improved. It turns out indexes only help if they match your query patterns. The order of fields matters, and whether you cover your query shapes will affect performance. Most importantly, just because you can add an index doesn't mean you should. The big shift for me was learning to read an explain() plan and see how MongoDB was actually executing my queries. Once I started matching my indexes to my queries, performance went from "ok" to "blazing fast." Around the same time, I stopped doing all my data transformation in application code.
Moving from “it works” to “it works reliably”

Early on, my approach to monitoring was simple: wait for something to break, then figure out why. If performance went down, I’d poke around in logs. If a server stopped responding, I’d turn it off and on again and hope for the best. It was stressful, and it meant I was always reacting instead of preventing problems.

When I learned to use MongoDB’s monitoring tools properly, that changed. I could track latency, replication lag, and memory usage. I set alerts for unusual query patterns. I started seeing small problems before they turned into outages. Performance troubleshooting became more methodical as well. Instead of guessing, I measured: breaking down queries, checking index use, and looking at server metrics side by side. The fixes were faster and more precise.

Reliability was the last piece I got serious about. I used to think a working cluster was a reliable cluster. But reliability also means knowing what happens if a node fails, how quickly failover kicks in, and whether your recovery plan actually works in practice. You can now learn those things in the Monitoring Tooling, Performance Tools and Techniques, and Cluster Reliability skill badges. If you are deploying and maintaining MongoDB clusters, these skills will teach you what you need to know to make your deployment more resilient. A few of the manual checks I mean are sketched below.
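For reference, here is a minimal mongosh sketch of the kind of manual checks I'm describing; hosted tooling such as Atlas monitoring automates most of this, and the 5-second threshold is an arbitrary assumption.

// Memory and connection pressure at a glance.
const status = db.serverStatus();
printjson({ mem: status.mem, connections: status.connections });

// How far each secondary lags behind the primary (replication lag).
rs.printSecondaryReplicationInfo();

// In-progress operations that have been running longer than 5 seconds.
db.currentOp({ secs_running: { $gte: 5 } });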
Getting curious about what’s next

Once my clusters were stable, I stopped firefighting, and my mindset changed. When you trust your data model, your indexes, your aggregations, and your operations, you get to relax. You can then spend that time on what’s coming next instead of fixing what’s already in production. For me, that means exploring features I wouldn’t have touched earlier, like Atlas Search, gen AI, and Vector Search. Now that the fundamentals are solid, I can experiment without risking stability and bring in new capabilities when a project actually calls for them.

What I’d tell my past self

If I could go back to when I first installed MongoDB, I’d keep it simple:

Focus on data modeling first. A good foundation will save you from most of the problems I ran into.

Once you have that, learn indexing and aggregation pipelines. They will make your life much easier when querying.

Start monitoring from day one. It will save you a lot of trouble in the long run.

Take a moment to educate yourself. You can only learn so much from trial and error. MongoDB offers a myriad of resources and ways to upskill yourself.

Once you have established that base, you can explore more advanced topics and uncover the full potential of MongoDB. Features like Vector Search, full-text search with Atlas Search, or advanced schema design patterns are much easier to adopt when you trust your data model and have confidence in your operational setup.

MongoDB Skill Badges cover all of these areas and more. They are short, practical, and focused on solving real problems you will face as a developer or DBA, and most of them can be taken over your lunch break. You can browse the full catalog at learn.mongodb.com/skills and pick the one that matches the challenge you are facing today. Keep going from there, and you might be surprised how much more you can get out of the database once you have the right skills in place.

October 2, 2025

Top Considerations When Choosing a Hybrid Search Solution

Search has evolved. Today, natural language queries have largely replaced simple keyword searches when addressing our information needs. Instead of typing “Peru travel guide” into a search engine, we now ask a large language model (LLM), “Where should I visit in Peru in December during a 10-day trip? Create a travel guide.”

Is keyword search no longer useful?

While the rise of LLMs and vector search may suggest that traditional keyword search is becoming less prevalent, the future of search actually relies on effectively combining both methods. This is where hybrid search plays a crucial role, blending the precision of traditional text search with the powerful contextual understanding of vector search. Despite advances in vector technology, keyword search still has a lot to contribute and remains essential to meeting current user expectations.

The rise of hybrid search

By late 2022, and particularly throughout 2023, as vector search saw a surge in popularity (see Figure 1 below), it quickly became clear that vector embeddings alone were not enough. Even as embedding models continue to improve at retrieval tasks, full-text search remains useful for identifying tokens outside the training corpus of an embedding model. That is why users soon began to combine vector search with lexical search, exploring ways to get both precision and context-aware retrieval. This shift was driven in large part by the rise of generative AI use cases like retrieval-augmented generation (RAG), where high-quality retrieval is essential.

Figure 1. Number of vector search vendors per year and type.

As hybrid search matured beyond basic score combination, two main fusion techniques emerged: reciprocal rank fusion (RRF) and relative score fusion (RSF). Both offer ways to combine results that do not rely on directly comparable score scales. RRF focuses on ranking position, rewarding documents that consistently appear near the top across different retrieval methods. RSF, on the other hand, works directly with the raw scores from each source of relevance, using normalization to minimize outliers and align modalities at a more granular level than rank alone can provide. Both approaches quickly gained traction and have become standard techniques in the market. The sketch below shows the core idea behind each.
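As an illustration, here is a minimal JavaScript sketch of both fusion techniques. The { id, score } result shape and the constant k = 60 (a common choice, from the original RRF paper) are assumptions for the example, not any vendor's API.

// Reciprocal rank fusion: only a document's position in each result list
// matters, never its raw score.
function reciprocalRankFusion(resultLists, k = 60) {
  const fused = new Map();
  for (const list of resultLists) {
    list.forEach((doc, index) => {
      const rank = index + 1; // 1-based rank within this list
      fused.set(doc.id, (fused.get(doc.id) ?? 0) + 1 / (k + rank));
    });
  }
  return [...fused.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}

// Relative score fusion: min-max normalize each list's raw scores to [0, 1]
// so lexical and vector scores become comparable, then sum them per document.
function relativeScoreFusion(resultLists) {
  const fused = new Map();
  for (const list of resultLists) {
    const min = Math.min(...list.map((d) => d.score));
    const max = Math.max(...list.map((d) => d.score));
    for (const doc of list) {
      const normalized = max === min ? 1 : (doc.score - min) / (max - min);
      fused.set(doc.id, (fused.get(doc.id) ?? 0) + normalized);
    }
  }
  return [...fused.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}

// Example: fuse a lexical result list with a vector result list.
// reciprocalRankFusion([lexicalResults, vectorResults]);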
How did the market react?

The industry realized the need to introduce hybrid search capabilities, which posed different challenges for different types of players. For lexical-first search platforms, the main challenge was to add vector search features and implement the bridging logic with their existing keyword search infrastructure. These vendors understood that the true value of hybrid search emerges when both modalities are independently strong, customizable, and tightly integrated.

On the other hand, vector-first search platforms faced the challenge of adding lexical search. Implementing lexical search through traditional inverted indexes was often too costly due to storage differences, increased query complexity, and architectural overhead. Many adopted sparse vectors, which represent keyword importance in a way similar to the term-frequency methods used in traditional lexical search. Sparse vectors were key for vector-first databases in enabling a fast integration of lexical capabilities without overhauling the core architecture.

Hybrid search soon became table stakes, and the industry focus shifted toward improving developer efficiency and simplifying integration. This led to a growing trend of vendors building native hybrid search functions directly into their platforms. By offering out-of-the-box support for combining and managing both search types, vendors accelerated the delivery of powerful search experiences.

As hybrid search became the new baseline, more sophisticated re-ranking approaches emerged. Techniques like cross-encoders, learning-to-rank models, and dynamic scoring profiles began to play a larger role, giving systems additional ways to capture nuanced user intent. These methods complement hybrid search by refining the result order based on deeper semantic understanding.

Top considerations when choosing a hybrid search solution

So what should you choose: a lexical-first or a vector-first solution? When deciding how to implement hybrid search, your existing infrastructure plays a major role. For users already working within a vector-first database, leveraging its lexical capabilities without rethinking the architecture is often enough. However, if the lexical search requirements are advanced, the optimal solution is usually a platform with mature lexical search coupled with vector search, like MongoDB. Traditional lexical (or lexical-first) search offers greater flexibility and customization for keyword search, and when combined with vectors, provides a more powerful and accurate hybrid search experience.

Figure 2. Vector-first vs. lexical-first systems: hybrid search evaluation.

Indexing strategy is another factor to consider. When setting up hybrid search, users can either keep keyword and vector data in separate indexes or combine them into one. Separate indexes give more freedom to tune each search type, scale them differently, and experiment with scoring. The compromise is higher complexity, with two pipelines to manage and the need to normalize scores. A combined index, on the other hand, is easier to manage, avoids duplicate pipelines, and can be faster since both searches run in a single pass. However, it limits flexibility to what the search engine supports and ties the scaling of keyword and vector search together. The decision is mainly a trade-off between control and simplicity.

Lexical-first solutions were built around inverted indexes for keyword retrieval, with vector search added later as a separate component. This often results in hybrid setups that use separate indexes. Vector-first platforms were designed for dense vector search from the start, with keyword search added as a supporting feature. These tend to use a single index for both approaches, making them simpler to manage but sometimes offering less mature keyword capabilities.

Lastly, a key aspect to take into account is the implementation style. Solutions with native hybrid search functions handle the combination of lexical and vector search for you, removing the need for developers to implement it manually. This reduces development complexity, minimizes potential errors, and ensures that result merging and ranking are optimized by default. Built-in function support streamlines the entire implementation, allowing teams to focus on building features rather than managing infrastructure.

In general, lexical-first systems tend to offer stronger keyword capabilities and more flexibility in tuning each search type, while vector-first systems provide a simpler, more unified hybrid experience. The right choice depends on whether you prioritize control and mature lexical features or streamlined management with lower operational overhead.

How does MongoDB do it?

When vector search emerged, MongoDB added vector search indexes alongside its existing lexical search indexes. With that, MongoDB evolved into a competitive vector database, giving developers a unified architecture for building modern applications. The result is an enterprise-ready platform that integrates traditional lexical search indexes and vector search indexes into the core database.

MongoDB recently released native hybrid search functions in MongoDB Atlas, and as a public preview for MongoDB Community Edition and MongoDB Enterprise Server deployments. This feature is part of MongoDB’s integrated ecosystem, where developers get an out-of-the-box hybrid search experience to enhance the accuracy of application search and RAG use cases. A sketch of what this looks like follows below.
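As a rough illustration, here is what a query using the $rankFusion aggregation stage can look like in mongosh. The "movies" collection, index names, field paths, query text, and query vector are all hypothetical, and the exact stage shape may evolve, so treat this as a sketch and check the current documentation.

db.movies.aggregate([
  {
    $rankFusion: {
      input: {
        pipelines: {
          // Full-text search over a lexical search index.
          fullText: [
            { $search: { index: "textIndex", text: { query: "space adventure", path: "plot" } } },
            { $limit: 20 },
          ],
          // Semantic search over a vector search index.
          semantic: [
            {
              $vectorSearch: {
                index: "vectorIndex",
                path: "plotEmbedding",
                queryVector: [0.01, 0.52 /* ... remaining embedding values ... */],
                numCandidates: 100,
                limit: 20,
              },
            },
          ],
        },
      },
      // Weight each input pipeline's contribution to the fused ranking.
      combination: { weights: { fullText: 1, semantic: 2 } },
    },
  },
  { $limit: 10 },
]);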
As a result, instead of managing separate systems for different workloads, MongoDB users benefit from a single platform designed to support both operational and AI-driven use cases. As generative AI and modern applications advance, MongoDB gives organizations a flexible, AI-ready foundation that grows with them.

Read our blog to learn more about MongoDB’s new Hybrid Search function. Visit the MongoDB AI Learning Hub to learn more about building AI applications with MongoDB.

September 30, 2025