
Evaluation of Update-Heavy Workloads With PostgreSQL JSONB and MongoDB BSON

January 26, 2026 | Updated: February 13, 2026 · 5 min read

JSON has become a common data format for modern applications, and as a result, many teams evaluate whether a single database can serve both relational and document-style workloads. PostgreSQL’s JSONB support and MongoDB’s BSON document model often appear comparable at a glance, leading to the assumption that they can be used interchangeably.

However, while both systems expose similar query and indexing capabilities, their internal storage models and execution paths differ in important ways. These differences are not immediately visible through APIs or simple benchmarks, but they begin to surface under realistic workloads involving frequent updates, indexing, and concurrency.

This article explores those differences through a series of controlled experiments, focusing on how each database stores JSON-like data, how common operations are executed, and how these design choices influence performance and resource utilization over time.

What is JSONB? 

JSONB is a PostgreSQL data type that enables applications to store JSON-like objects natively within a relational table, alongside traditional columns. Internally, JSONB stores data in a binary format that enables efficient parsing, indexing, and querying compared to plain JSON text. This capability has made PostgreSQL an attractive option for teams looking to work with semistructured data without moving away from a relational database.
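To make the JSONB update semantics discussed below concrete, here is a minimal pure-Python sketch of what PostgreSQL's `jsonb_set` function expresses logically: replacing one value at a path inside a document. This is only a semantic mimic — in PostgreSQL the operation runs server-side against the binary JSONB representation — and the table, column, and field names in the comment are hypothetical.

```python
import copy

def jsonb_set_like(doc, path, new_value):
    """Mimic the logical effect of PostgreSQL's jsonb_set(target, path, new_value):
    return a copy of doc with the value at `path` replaced.
    (PostgreSQL performs this server-side on the binary JSONB value.)"""
    result = copy.deepcopy(doc)
    node = result
    for key in path[:-1]:
        node = node[key]
    node[path[-1]] = new_value
    return result

# Roughly corresponds to (hypothetical table/column names):
#   UPDATE users SET data = jsonb_set(data, '{profile,age}', '31') WHERE id = 1;
row = {"profile": {"name": "Ada", "age": 30}, "tags": ["alpha"]}
updated = jsonb_set_like(row, ["profile", "age"], 31)
```

Note that, like `jsonb_set`, this returns a whole new value rather than patching the original in place — a detail that matters for the update-cost discussion later in this article.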

However, an important question arises when JSONB is used beyond simple storage and querying: How does PostgreSQL actually process JSONB internally? We can separate this into two more specific questions: Does PostgreSQL flatten JSONB documents into individual values during execution, or does it operate on the document as a whole? And if the document is not flattened, what are the performance implications of this design under real application workloads?

These questions become especially relevant when PostgreSQL is evaluated as part of a modernization strategy, often driven by developer familiarity and the convenience of using JSONB as a substitute for a document database. While JSONB offers flexibility and expressive query capabilities, it is not immediately clear whether this flexibility translates into equivalent performance characteristics when compared to systems designed natively around document storage.

This article explores these questions by examining how PostgreSQL stores and processes JSONB, and how those internal behaviors influence performance under common workloads. The goal is not to compare features but to understand the practical consequences of using JSONB as a foundational data model in modern applications.

What is BSON?

BSON, or binary JSON, is more than a serialization format used for storing data on disk. It is a foundational representation that MongoDB’s query engine, update operators, and storage engine are designed around. While BSON appears similar to JSON at a superficial level, its binary structure allows fields and array elements to be addressed by position rather than treated as a single serialized object.

MongoDB’s default storage engine, WiredTiger, works closely with this representation by managing data at a fine-grained binary level. Each field in a BSON document is stored with explicit type and length information, enabling the engine to locate and modify specific fields efficiently. This design enables MongoDB to apply update operations such as $set directly to targeted fields without requiring the entire document to be read, reconstructed, and rewritten in memory.

As a result, updates can be performed with significantly lower memory and I/O amplification compared to systems that treat documents as monolithic values. The impact of this design becomes especially visible under update-heavy workloads, which this document explores through a proof of concept.
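The following toy encoding illustrates the idea of a field-addressable binary layout. It is deliberately not the real BSON wire format — just a simplified length-prefixed scheme — but it shows how per-field type and length headers let an engine walk to one field and patch a fixed-width payload in place, without re-encoding the rest of the document.

```python
import struct

# Toy length-prefixed layout (NOT the actual BSON spec): each field is
#   [name_len:u8][name][type:u8][payload_len:u32][payload]
# Type 0x01 = little-endian int64, 0x02 = UTF-8 string.

def encode(fields):
    out = bytearray()
    for name, value in fields.items():
        nb = name.encode()
        out += struct.pack("<B", len(nb)) + nb
        if isinstance(value, int):
            payload = struct.pack("<q", value)
            out += struct.pack("<BI", 0x01, len(payload)) + payload
        else:
            payload = value.encode()
            out += struct.pack("<BI", 0x02, len(payload)) + payload
    return out

def set_int_field(buf, name, new_value):
    """Walk the field headers and overwrite a fixed-width int64 payload
    in place -- none of the other fields are touched or re-encoded."""
    i = 0
    while i < len(buf):
        nlen = buf[i]; i += 1
        fname = bytes(buf[i:i + nlen]).decode(); i += nlen
        ftype, plen = struct.unpack_from("<BI", buf, i); i += 5
        if fname == name and ftype == 0x01:
            struct.pack_into("<q", buf, i, new_value)
            return
        i += plen
    raise KeyError(name)

doc = encode({"name": "Ada", "login_count": 41})
set_int_field(doc, "login_count", 42)  # patches 8 bytes at a known offset
```

The key point is the contrast with a text-based or monolithic representation, where changing `login_count` would require parsing, rewriting, and re-serializing the entire document.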

Proof of concept: An update-heavy workload 

To understand how PostgreSQL JSONB and MongoDB BSON behave under sustained update pressure, I designed a controlled load test that simulates a common application scenario: frequent updates to existing JSON documents.

In this scenario, the application continuously updates documents stored as JSONB in PostgreSQL and BSON in MongoDB. The objective is to observe how each database handles update-only workloads over an extended period, particularly with respect to latency, throughput, and system stability.

The same logical data model and update pattern were used across both systems. The tests were executed with the following configuration:

Test configuration

  • Concurrency: 256 concurrent users

  • Test duration: 30 minutes

  • Workload type: Update-only

  • Total existing documents: ~13 million

PostgreSQL environment

  • Deployment: AWS RDS PostgreSQL

  • Instance type: m5.xlarge

  • Storage format: JSONB

MongoDB environment

  • Deployment: MongoDB Atlas

  • Cluster tier: M40 (4 vCPU, 16 GB RAM), with auto-scale up to M50 (8 vCPU, 32 GB RAM)

  • Storage format: BSON
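The article does not publish the load-test harness itself, but the update-only operations it describes can be sketched as follows. This generator produces one batch of update specs in MongoDB's filter/`$set` shape (the field names `counters.visits` and `status` are hypothetical); an equivalent PostgreSQL harness would issue `UPDATE ... SET data = jsonb_set(...)` statements against the same logical documents.

```python
import random

TOTAL_DOCS = 13_000_000   # matches the ~13 million existing documents in the test
CONCURRENCY = 256         # matches the 256 concurrent users

def make_update_op(rng):
    """Build one update-only operation: pick an existing document at random
    and modify a couple of fields in place (field names are hypothetical)."""
    doc_id = rng.randrange(TOTAL_DOCS)
    return {
        "filter": {"_id": doc_id},
        "update": {"$set": {
            "counters.visits": rng.randrange(1_000),
            "status": rng.choice(["active", "idle", "archived"]),
        }},
    }

rng = random.Random(42)
batch = [make_update_op(rng) for _ in range(CONCURRENCY)]
# Each worker would then issue, against a live cluster:
#   collection.update_one(op["filter"], op["update"])
```

Because the workload is update-only against pre-existing documents, every operation exercises the in-place modification path of each engine rather than inserts or reads.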

Results and Observations 

These results are based on a synthetic workload, and they may vary for specific use cases. The machine sizes were chosen to represent typical application deployments.

The following analyzes the observed behavior of both systems under the synthetic workload and highlights the architectural factors that influence their performance characteristics.

  1. Under identical client concurrency and workload conditions, MongoDB sustained a consistently high rate of update operations throughout the duration of the test. PostgreSQL, in comparison, processed a lower volume of updates during the same time window.

  2. As the test progressed, the divergence in throughput became more noticeable. MongoDB maintained steady performance characteristics under sustained write pressure, while PostgreSQL’s throughput showed signs of gradual decline over time.

  3. These observations reflect differences in how each system handles sustained update-intensive workloads. The results should be interpreted within the broader context of workload design, configuration choices, and architectural trade-offs, as both databases are optimized for different operational models and use cases.

Correlating CPU utilization with data format

PostgreSQL Analysis

Under the JSONB update-focused workload, PostgreSQL reached very high CPU utilization levels during the test. Rather than translating directly into higher throughput, this state coincided with increased application latency and reduced update rates over time.

This behavior reflects PostgreSQL’s architectural model for handling JSONB updates, described earlier in this document. When JSONB values are modified, particularly when indexed or larger fields are involved, the database may need to rewrite the full value and create new row versions under its MVCC model. In some cases, HOT (Heap-Only Tuple) optimizations cannot be applied, which can result in additional index maintenance, WAL generation, and accumulation of dead tuples. When all application data is concentrated in a single JSONB field, initial performance targets may still be met, but as the application grows and more indexes are added, update-heavy use cases can struggle to scale.
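As a rough illustration of why full-value rewrites are costly, the following back-of-envelope sketch compares bytes written per update when the whole JSONB value must be carried in a new row version (plus WAL) versus a targeted field-level patch. Every number here is an assumption for illustration only — none of these sizes were measured in the test above, and real overheads (tuple headers, index entries, journal batching) differ.

```python
# Back-of-envelope write amplification per update (illustrative numbers,
# NOT measured from the test): one 8-byte field changes inside a document.
DOC_SIZE = 4 * 1024     # assumed average document size: 4 KiB
FIELD_SIZE = 8          # the field actually being changed

# Full-value rewrite under MVCC: the new row version carries the whole
# JSONB value, and the change is also written to the WAL (simplified).
postgres_bytes = DOC_SIZE + DOC_SIZE

# Field-level update: roughly the field itself plus assumed journal overhead.
mongo_bytes = FIELD_SIZE + 64

amplification = postgres_bytes / mongo_bytes
print(f"~{amplification:.0f}x more bytes written per update")
```

Even with generous assumptions in PostgreSQL's favor, the gap grows linearly with document size, which is consistent with the throughput divergence observed as the test progressed.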

These characteristics are consistent with PostgreSQL’s design, where flexibility and transactional guarantees are prioritized, and performance under heavy JSONB update workloads depends significantly on schema design, indexing strategy, and maintenance tuning.

MongoDB Analysis

Under the same update-focused workload, MongoDB exhibited a different resource utilization pattern. CPU usage remained steady and moderate throughout the test period, while latency remained stable and bounded. Throughput showed no visible degradation as the workload progressed.

MongoDB’s document model allows targeted field-level updates using operators such as $set, leveraging BSON’s field-addressable structure. This enables updates to modify specific fields without requiring full document reconstruction in typical cases. As the load increased, CPU utilization scaled in proportion to completed update operations, and tail latency remained controlled.

This reflects MongoDB’s design emphasis on document-oriented workloads, where frequent field-level updates are common. Its performance characteristics in this scenario illustrate how architectural choices influence behavior under sustained update pressure.

Conclusion

MongoDB’s document model is designed around field-level update semantics, where operators such as $set allow precise modification of specific attributes within a document. This design can perform efficiently in workloads characterized by frequent partial updates.

PostgreSQL’s JSONB data type similarly provides functionality such as jsonb_set, enabling targeted updates within JSON structures. From a feature perspective, both systems support fine-grained modification of document-style data.

However, the underlying storage architectures differ. PostgreSQL’s tuple-based MVCC model requires new row versions to be created on update, and depending on indexing and data layout, this can introduce additional overhead in heavy write or update-intensive scenarios. Under certain workloads, this architectural characteristic may become a bottleneck if not carefully tuned. MongoDB’s document-oriented storage model handles updates differently, which can influence performance behavior under sustained write pressure.

Ultimately, both databases provide mechanisms to manage JSON-style data effectively. Observed performance differences stem less from feature availability and more from architectural trade-offs in how updates are physically managed on disk and in memory.
