Steve Jurczak


AWS and MongoDB: Partners in Reliable, Resilient Cloud Environments

Security is increasingly critical for application development. As the volume of applications developed, distributed, used, and patched over networks rapidly expands, so, too, do cyberattacks and data breaches, many of which happen at the web application layer. As more organizations move to the cloud, it's imperative for customers to know who's responsible for what when it comes to security. Understanding these roles and responsibilities is crucial for ensuring cloud workloads remain secure and available. MongoDB and AWS are working together to simplify and strengthen data security for our customers so they can focus on developing great applications and user experiences. For more information on shared responsibility, read the first blog in this series.

Shared responsibility in the cloud

Back when most IT environments lived on premises, the responsibility of securing the systems and networked devices fell squarely on the owner of the assets — usually the business owner or a managed service provider. Today, with the prevalence of cloud applications, hybrid environments, and pay-as-you-go services, it is often not clear who's responsible for what when it comes to securing those environments, services, and the data they contain. For this reason, the shared responsibility model of cloud security has emerged.

Under the shared responsibility model, some security responsibilities fall on the business, some on public cloud providers, and some on the vendors of the cloud services being used. When you deploy a MongoDB Atlas database on AWS, the database is created on infrastructure operated, managed, and controlled by AWS, from the host operating system and virtualization layer down to the physical security of the AWS data centers. MongoDB is responsible for the security and availability of the services we offer — and for everything within the scope of our responsibilities as a SaaS vendor. Customers are responsible for the security of everything above the application layer — accounts, identities, devices, and data — plus the management of the guest operating system, including updates and security patches; associated application software; and the configuration of the AWS-provided security group firewall. (See Figure 1.)

Figure 1. Shared responsibility when using MongoDB Atlas.

Strategic partners in data solutions

MongoDB Chief Information Security Officer Lena Smart delivered a keynote at AWS re:Inforce, an event where security experts offered tips and best practices for securing workloads in the cloud, and was also interviewed by theCUBE. Smart noted how MongoDB and AWS are working together to enable our joint customers to focus more on business objectives while having confidence in the cloud services and infrastructure they get from us.

"You want to worry less about security so that you can focus on application development, performance, availability, business continuity, data management, and access," Smart said. "As the CISO of MongoDB, these concerns are also my top concerns as we work to better serve our global customer base. And we are very appreciative of the opportunity to do this in lockstep with AWS."

Jenny Brinkley, Director, AWS Security, agrees that customers stand to benefit from the shared responsibility model. "The shared responsibility model is a huge reason why more customers are deploying in the cloud," Brinkley said. "AWS, combined with marketplace services like MongoDB Atlas, helps relieve the customer's operational burden so they can focus on driving their businesses forward."

Smart's appearance at the event is just one example of how MongoDB and AWS are working together to deliver scalable data intelligence solutions for enterprise data in the cloud, reduce risk for cloud-native tools, and enable our joint customers to achieve compliance and protect their sensitive data. Thanks to our strategic partnership, organizations around the globe and across a wide range of industries — from banking and airlines to insurance and e-commerce — are better able to discover, manage, protect, and get more value from their regulated, sensitive, and personal data across their data landscape.

MongoDB Atlas is trusted by organizations with highly sensitive workloads because it is secure by default. We're constantly innovating with breakthrough technologies, like our industry-first Queryable Encryption, which allows customers to run rich, expressive queries on fully randomized encrypted data, improving both the development process and the user experience.

MongoDB Atlas is designed to be secure by default. Try it for free. MongoDB Atlas (Pay as You Go) is now available in AWS Marketplace — try it today.

August 11, 2022

How MongoDB Protects Against Supply Chain Vulnerabilities

Software supply chain vulnerabilities became national news in late 2020 with the discovery of the SolarWinds cyberattack. A year later, as if to put an exclamation point on the issue, the Log4j security flaw was discovered. Before these incidents, cybersecurity headlines typically focused on ransomware and phishing attacks, and organizations responded by increasing defensive measures, expanding network security beyond the perimeter, and mandating security awareness training. Protecting organizations from supply chain vulnerabilities, however, is a more complex undertaking. Download Supply Chain Security in MongoDB's Software Development Life Cycle.

Transparency and testing

Few organizations have complete transparency into the software supply chain. The software supply chain includes all components — third-party dependencies, open source scripts, contractors, and other miscellaneous components and drivers — directly involved in developing an application. When dealing with a dozen or more vendors, applications, and service providers, it's hard to know all the elements that make up your organization's software supply chain.

As a backend solutions provider with open source roots, MongoDB is keenly aware of the need for security and transparency in the software supply chain. Long before supply chain vulnerabilities became national news, we implemented numerous safeguards to ensure the security of our products throughout the software development life cycle (SDLC). For example, in the planning stage, we look at our software from an attacker's perspective by trying to find ways to bypass authentication and gain unauthorized access. In the sprint stage, we conduct thousands of CPU hours of tests every week, and we run builds on thousands of compute nodes 24/7 on different combinations of every major hardware platform, operating system, and software language. And in the deployment stage, we perform hundreds of hours of automated testing to ensure correctness on every source code commit. We also invite the MongoDB community and other third parties to submit reports of bugs found in our products, both open source and enterprise packages. Finally, we conduct periodic bug hunts with rewards for community members who contribute by improving a release.

Securing third-party software

The area that organizations have the least visibility into is perhaps the use of third-party libraries. Almost all applications use software that was written by someone else. According to some industry estimates, third-party libraries make up between 30% and 90% of typical applications. At MongoDB, all third-party libraries are evaluated and vetted by the security team before being incorporated into MongoDB products. We also use security tools to scan source code, identify known security vulnerabilities, and test against government benchmarks like Common Vulnerabilities and Exposures (CVE) and Common Weakness Enumeration (CWE), as well as private-entity frameworks like the SANS Institute's list of software vulnerabilities. If we identify a vulnerability, we use the IETF Responsible Vulnerability Disclosure Process to evaluate and mitigate the issue, communicate with our user base, and perform a postmortem assessment. Details are also published to the MongoDB Alerts page along with release notes and a description of fixes.

Using SBOMs

To encourage even more transparency within the software supply chain, we've been at the forefront of the push for a software bill of materials (SBOM, pronounced "S-bomb"). A software bill of materials is a list of ingredients used by an application, including all the libraries and components that make up that application, whether they are third-party, commercial off-the-shelf (COTS), or open source. By providing visibility into all of the individual components and dependencies, SBOMs are seen as a critical tool for improving software supply chain security. MongoDB's CISO, Lena Smart, recently conducted a panel discussion with a handful of cybersecurity experts on the need for SBOMs in the wake of President Joe Biden's executive order on supply chain security.

Vulnerabilities in software will always exist, and the determination of malicious actors means that some of those vulnerabilities will be exploited. MongoDB believes that secure digital experiences start with secure software development. That means having the proper controls in place, continuously probing for weaknesses, and maintaining transparency in the CI/CD pipeline. For more detailed information, download our white paper Supply Chain Security in MongoDB's Software Development Life Cycle.
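To make the SBOM idea concrete, here is a minimal, hypothetical Python sketch (not MongoDB tooling) that walks a CycloneDX-style JSON SBOM and flags components whose versions appear on an internal deny list. The file name, field layout, and deny list entries are assumptions for illustration only.

```python
import json

# Hypothetical inputs for illustration: a CycloneDX-style SBOM export
# and an internal deny list of known-bad component versions.
SBOM_PATH = "app-sbom.cyclonedx.json"
DENY_LIST = {("log4j-core", "2.14.1")}  # example: a Log4Shell-era version

def load_components(path):
    """Yield (name, version) pairs from the SBOM's 'components' array."""
    with open(path) as f:
        sbom = json.load(f)
    for component in sbom.get("components", []):
        yield component.get("name", "<unknown>"), component.get("version", "<unknown>")

def audit(path):
    """Print every component and call out any that appear on the deny list."""
    for name, version in load_components(path):
        flag = "  <-- flagged" if (name, version) in DENY_LIST else ""
        print(f"{name} {version}{flag}")

if __name__ == "__main__":
    audit(SBOM_PATH)
```

Even a small script like this shows why an SBOM matters: once the ingredient list exists in a machine-readable form, checking it against newly disclosed vulnerabilities becomes routine automation rather than guesswork.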

August 9, 2022

Tools for Implementing Zero Trust Security With MongoDB

The practice of protecting IT environments from unauthorized access used to be centered on perimeter security — the strategy of securing the perimeter but allowing unrestricted access inside it. As users became increasingly mobile and IT assets became increasingly dispersed, however, the notion of a network perimeter became obsolete. That strategy has now been replaced by the concept of zero trust. In a zero trust environment, the perimeter is assumed to have been breached. There are no trusted users, and no user or device gains trust simply because of its physical or network location. Every user, device, and connection must be continually verified and audited.

MongoDB offers several tools and features for integrating our products into a zero trust environment, including:

Security by default
Multiple forms of authentication
TLS and SSL encryption
X.509 security certificates
Role-based access control (RBAC)
Database authentication logs
Encryption for data at rest, in flight, and in use

For government customers, MongoDB Atlas for Government is FedRAMP-ready.

Security by default

MongoDB Atlas clusters do not allow for any connectivity to the internet when they're first spun up. Each dedicated MongoDB Atlas cluster is deployed in a unique virtual private cloud (VPC) configured to prohibit inbound access. (Free and shared clusters do not support VPCs.) The only way to access these clusters is through the MongoDB Atlas interface. Users can configure IP access lists to allow certain addresses to attempt to authenticate to the database. Without being included on such a list, application servers are unable to access the database. Even the person who sets up the clusters needs to add their IP address to the access list. To find out more about the security measures that protect our cloud-based database, MongoDB Atlas, and the rules governing employee access, read our whitepaper, MongoDB: Capabilities for Use in a Zero Trust Environment.

Authentication

Customers have several options to allow users to authenticate themselves to a database, including a username and password, LDAP proxy authentication, and Kerberos authentication. All forms of MongoDB support transport layer security (TLS) and SCRAM authentication. They are turned on by default and cannot be disabled. Traffic from clients to MongoDB Atlas is authenticated and encrypted in transit, and traffic between a customer's internally managed MongoDB nodes is also authenticated and encrypted in transit using TLS. For passwordless authentication, MongoDB offers two different options to support the use of X.509 certificates. The first option, called "easy," auto-generates the certificates needed to authenticate database users. The "advanced" option is for organizations already using X.509 certificates that already have a certificate management infrastructure. The advanced option can be combined with LDAPS for authorization. Access infrastructure can only be reached via bastion hosts and by users for whom senior management has approved backend access. These hosts require multifactor authentication and are configured to require SSH keys — not passwords.
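As an illustration of the passwordless X.509 option described above, here is a minimal Python sketch using the PyMongo driver. The cluster host, certificate paths, and database names are placeholders, and the certificate itself would come from Atlas's "easy" auto-generation or your own PKI; treat this as a sketch rather than a reference configuration.

```python
from pymongo import MongoClient

# Placeholder values for illustration; substitute your own cluster host and
# a client certificate issued by Atlas or your certificate infrastructure.
CLUSTER_URI = "mongodb+srv://cluster0.example.mongodb.net"

client = MongoClient(
    CLUSTER_URI,
    tls=True,                                 # encrypt traffic in transit
    tlsCertificateKeyFile="client-cert.pem",  # client certificate + private key
    authMechanism="MONGODB-X509",             # passwordless X.509 authentication
)

# The distinguished name in the certificate maps to a database user,
# so no username or password ever crosses the wire.
status = client.admin.command("connectionStatus")
print(status["authInfo"]["authenticatedUsers"])
```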
Logging and auditing

MongoDB supports a wide variety of auditing strategies, making it easier to monitor your zero trust environment to ensure that it remains in force and encompasses your database. Administrators can configure MongoDB to log all actions or apply filters to capture only specific events, users, or roles. Role-based auditing lets you log and report activities by specific role, such as userAdmin or dbAdmin, coupled with any roles inherited by each user, rather than having to extract activity for each individual administrator. This approach makes it easier for organizations to enforce end-to-end operational control and maintain the insight necessary for compliance and reporting. The audit log can be written to multiple destinations in a variety of formats, such as to the console and syslog (in JSON) and to a file (JSON or BSON). It can then be loaded into MongoDB and analyzed to identify relevant events.

Encryption

MongoDB also lets you encrypt data in flight, at rest, or even, with field-level encryption and queryable encryption, in use. For data in motion, all versions of MongoDB support TLS and SSL encryption. For data at rest, MongoDB supports AES-256 encryption, and it can also be configured for FIPS compliance. To encrypt data while it is in use, MongoDB offers client-side field-level encryption, which can be implemented to safeguard data even from database administrators and vendors who otherwise would have access to it. Securing data with client-side field-level encryption allows you to move to managed services in the cloud with greater confidence. The database only works with encrypted fields, and organizations control their own encryption keys, rather than having the database provider manage them. This additional layer of security enforces an even more fine-grained separation of duties between those who use the database and those who administer and manage it. MongoDB Atlas exclusively offers Queryable Encryption, which allows customers to run rich, expressive queries on fully randomized encrypted data with efficiency, improving both the development process and the user experience. Organizations are able to protect their business by confidently storing sensitive data and meeting compliance requirements.

Zero trust and MongoDB

MongoDB is optimally suited for use within a zero trust environment. MongoDB is secure by default and has developed industry-leading capabilities in key areas such as access, authorization, and encryption. Used together, these features help protect the database from outside attackers and from internal users who otherwise could gain an unauthorized level of access. For more detailed information about security features in MongoDB, read our whitepaper, MongoDB: Capabilities for Use in a Zero Trust Environment.
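To ground the client-side field-level encryption discussion above, below is a hedged Python sketch using PyMongo's ClientEncryption helper with a local test key; production deployments would typically point at a cloud KMS instead. The key names, namespaces, and field values are invented for illustration and are not a reference implementation.

```python
import os
from bson.binary import STANDARD
from bson.codec_options import CodecOptions
from pymongo import MongoClient
from pymongo.encryption import Algorithm, ClientEncryption  # requires pymongo[encryption]

# A local 96-byte master key is fine for experimentation; real deployments
# would normally use AWS KMS, Azure Key Vault, or GCP KMS.
kms_providers = {"local": {"key": os.urandom(96)}}
key_vault_namespace = "encryption.__keyVault"

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string

client_encryption = ClientEncryption(
    kms_providers,
    key_vault_namespace,
    client,
    CodecOptions(uuid_representation=STANDARD),
)

# Create a data-encryption key, then encrypt a field before it leaves the app.
data_key_id = client_encryption.create_data_key("local", key_alt_names=["demo-key"])
encrypted_ssn = client_encryption.encrypt(
    "123-45-6789",
    Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic,
    key_id=data_key_id,
)

# The server only ever sees ciphertext for this field.
client.hr.employees.insert_one({"name": "Ada", "ssn": encrypted_ssn})
print(client_encryption.decrypt(client.hr.employees.find_one()["ssn"]))
```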

August 2, 2022

Rockets, Rock ’n’ Roll, and Relational Databases — a Look Back at the Year of RDBMS

When you reach a certain age, you'd rather not be reminded of how old you are on your birthday. At 52 years old this summer, the relational database management system (RDBMS) has reached that point. But also at 52, it's close to that other stage in life, the one in which you no longer care what others say about you. We're talking things like:

"It's overly rigid and doesn't adapt to change very well."
"Queries aren't fast enough to support my application's needs."
"They're prone to contention."

Happy 52nd birthday, RDBMS

Let's put things in perspective. The relational database was invented as close to World War I as it was to 2022. In fact, most developers using relational databases today weren't even born when Edgar F. Codd, an English computer scientist working for IBM, published his paper "A Relational Model of Data for Large Shared Data Banks" in June 1970. At a time when computer calculations cost hundreds of dollars, Codd's radical model offered a highly efficient method for expressing queries and extracting information. Once Codd's relational model was implemented, it allowed unprecedented flexibility to work with data sets in new ways. His innovation laid a foundation for database theory that would dominate the next 40 years, culminating in today's multi-billion-dollar database market. As a service to the developer community — and to commemorate 52 years of the workhorse database — we present other events in 1970, the year the relational database was born.

Turning over a new leaf

Relational databases were a huge leap forward when they were first conceived. Although many use cases are still a good fit for relational databases, modern apps consist of smaller, modular microservices, each with unique query patterns, data modeling requirements, and scale requirements. Being able to model data according to the exact query patterns of an app is a huge benefit, which is where MongoDB Atlas comes in. MongoDB Atlas stores data in documents using a binary form of JavaScript Object Notation (JSON). Documents provide an intuitive and natural way to model data that is closely aligned with the objects developers work with in code. Rather than spreading a record across multiple columns and tables, each record is stored in a single, hierarchical document. This model accelerates developer productivity, simplifies data access, and, in many cases, eliminates the need for expensive join operations and complex abstraction layers. MongoDB offers online courses for users with RDBMS and SQL knowledge to learn how to map relational databases to MongoDB. You can also set up a cluster and try MongoDB Atlas free.
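As a small illustration of the document model described above, the following hedged Python/PyMongo sketch stores an order as one hierarchical document instead of rows spread across several normalized tables. The collection name and fields are made up for the example.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
orders = client.shop.orders

# One order, one document: customer details and line items are embedded,
# so reading the order back requires no joins across separate tables.
orders.insert_one({
    "order_id": 1001,
    "placed_at": datetime.now(timezone.utc),
    "customer": {"name": "Maria Gomez", "email": "maria@example.com"},
    "items": [
        {"sku": "KB-42", "description": "Keyboard", "qty": 1, "price": 59.00},
        {"sku": "MS-07", "description": "Mouse", "qty": 2, "price": 19.50},
    ],
})

# Query directly on embedded fields, the same way the application thinks about them.
for order in orders.find({"items.sku": "KB-42"}, {"_id": 0, "order_id": 1, "customer.name": 1}):
    print(order)
```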

July 12, 2022

10 Things We Learned at MongoDB World 2022

When you return to a normal routine after a long break, you find out how much you miss your old routine. After hosting MongoDB World remotely for two years, we were happy to get back to seeing people in person — almost 3,000 of them. Here’s a quick rundown of the top 10 things we learned at MongoDB World 2022. 1. Queryable Encryption was a hit How many times have you been to a concert and the opening act winds up being as good as the band you actually went to see? Queryable Encryption was like that at MongoDB World 2022. While a lot of attendees came to learn about MongoDB Atlas Search or Atlas Serverless Databases , they were equally intrigued by the ability to encrypt data in use and perform rich, expressive queries on encrypted data. This groundbreaking innovation is the result of a collaborative effort between Brown University cryptographer Seny Kamara, his longtime collaborator Tarik Moataz, and MongoDB. 2. Developers are in the driver's seat Starting with the opening keynote by MongoDB CEO Dev Ittycheria, MongoDB World reinforced the notion that developers are the key to the future success and productivity for today’s organizations. “Every product we build, every feature we develop, is all geared toward developer productivity,” Ittycheria said. In fact the entire event centered on powerful new tools that are now available in our developer data platform. In the Partner Promenade, dozens of vendors showed how they’re helping developers become faster and more productive. As Søren Bramer Schmidt, chief architect and founder of Prisma, explained, “New generations of developers are much bigger, and we can invest in better tooling for them. It’s an exciting time to be building tools for developers.” As the world increasingly goes digital, developers will be the key to companies’ success. Services, products, and advancements are inherently tied to the ability of developers to quickly build, iterate, and release. 3. Everyone's data is in motion The volume of data moving to the cloud is unprecedented. In a session titled “Connecting Distributed Data to MongoDB With Confluent,” Joseph Morais, cloud partner solutions architect for Confluent , cited a study that predicted 75% of all databases would be on a cloud platform by 2022. MongoDB senior vice president of product management, Andrew Davidson, said, “MongoDB has really broken through with the MongoDB Relational Migrator at the perfect time, since so many enterprises are accelerating their efforts to get off legacy relational databases and legacy on-premises estates to move to MongoDB Atlas.” 4. Public cloud security is not as easy as some people think While scores of businesses are increasing their cloud footprints with new cloud-native services and applications, securing them is becoming increasingly complex. Steve Walsh, senior solutions architect at MongoDB, gave a session titled “Securing Your Application's Data in the Public Cloud” and cited constantly changing cloud deployments and security policies in multi-cloud environments as reasons why security can be three times more complex in a multi-cloud environment. According to an ITRC study that Walsh cited, failure to configure cloud settings properly caused 30% of data breaches in 2021. MongoDB Atlas is designed to be secure by default , which simplifies the process of restricting access to sensitive data. 5. 
Ray Kurzweil might be even more prescient than he realizes On Day 3 of MongoDB World 2022, best-selling author, pioneering inventor, and futurist Ray Kurzweil delivered a wide-ranging keynote address covering everything from computational power to vaccine trials to life expectancy and literacy rates. In the address, Kurzweil said it was likely that an AI would pass a Turing test by 2029. Just days later, news reports came out about a Google engineer who’d been fired after claiming that an artificial-intelligence chatbot the company developed had become sentient , though the company dismissed the claims. 6. Attendees were eager to try MongoDB It’s easy to assume that everyone who came to MongoDB World was already using it and wanted to know about new features and capabilities. But in the Learn Booth at the event, plenty of visitors weren't using MongoDB at all — they were there to discover and evaluate. In the Ask the Experts booth, roughly one in 10 people asked about how to prepare to migrate to MongoDB. One of the most common questions we heard was, "How do I convert relational schemas to the document model?" We have tools like Relational Migrator to help with that. We also recommend training for developer and ops teams, including our MongoDB for SQL Pros university course and our Developer-Led Training programs to ramp them up on what makes MongoDB different from SQL. 7. Developer friction comes in many forms The opening keynote address and product announcements set the stage for many of the conversations we had over the next few days. We consistently heard from developers about the friction points that we could help eliminate for them, and how reducing developer friction results in real benefits — apps and services get launched that could not have existed otherwise because of the toll that complexity takes on development teams’ bandwidth. Atlas Serverless databases are going to be a big part of getting those new services off the ground because it’s one less thing developers have to worry about. And the MongoDB CLI allows developers to interact with our services using the method they’re familiar with — especially advanced developers who prefer control and speed over a more visual interface. 8. @MarkLovesTech draws the crowds MongoDB CTO Mark Porter was the center of the action at the event. Wherever he went, a crowd would gather, eager to meet, exchange thoughts, and ask questions. His talks during the Builder’s Fest were standing room only. Mark Porter delivers a short talk on scaling and managing teams at MongoDB World 2022. Photo by Eoin Brazil. 9. Every software company needs custom track jackets Our field marketing team knocked it out of the park with the custom track jacket. After MongoDB CEO Dev Ittycheria debuted the jacket during the Day 1 keynote , it immediately became the most desired piece of swag of the show. A few lucky contestants won their own track jackets during the Builder’s Fest. Developers are either highly fashion-conscious or avid joggers. 10. There's no replacement for in-person gatherings For almost three years, we’ve been getting by with remote events and Zoom calls, but we learned at least two more things from MongoDB World 2022: There’s no replacement for real-life, in-person experiences, and remote interactions actually require a different set of skills. “It is not impossible to talk with people on Zoom. But it requires so much more intentionality,” Mark Porter said. 
“My takeaway from MongoDB World is making sure that in this new hybrid world, we can talk with people! But even on Zoom, we must become much more focused on the intentionality of talking with them because it is so much different."

June 17, 2022

Highlights From MongoDB World 2022, Day 3

As we said on Day 1 , MongoDB World is a developer-focused event. And on Day 3, we really set out to prove it. The day got going with a keynote from best-selling author, pioneering inventor, and futurist Ray Kurzweil. His encyclopedic knowledge covers a wide range of topics and subject areas, and his talk was equally broad and freewheeling, touching on everything from computational power to vaccine trials to life expectancy and literacy rates. Kurzweil’s general viewpoint was overwhelmingly positive. He cited global poverty and literacy rates, per capita income, and the spread of democracy as examples of how the world is steadily becoming a better place to live. Not shy of making predictions, Kurzweil anticipates computational power roughly doubling each year, bringing AI ever closer to emulating human intelligence. In fact, he predicts that some AI systems will be able to pass the Turing test by 2029. And he sees humans eventually connecting directly to AI systems, expanding our emotional and intellectual intelligence far beyond our current state. He refers to this eventuality as the “ singularity ” and with it, human life will be changed forever. Minds were blown, but not so much that the developers in attendance weren’t ready to get down to doing what they love to do: building apps and writing code. Immediately after the keynote, Builder’s Fest kicked into gear in the Partner Promenade. The floor of the Jacob Javits Center was transformed by dozens of pods where MongoDB experts, partners, and customers gave hands-on tutorials showing how their services and applications integrated with the MongoDB developer data platform. Booming over the main sound system was a super-sized, four-person Mario Kart battle royale, where the victors won prizes like a Nintendo Switch. Another pod hosted a Price is Right–style game show, The Database is Right, where contestants drawn from the audience answered trivia questions about MongoDB, document databases, and database functions. Adjacent to the Bob Barker cosplay, MongoDB senior product manager Rob Walters gave an eager audience a live demo of how to configure the MongoDB Connector for Apache Kafka to use MongoDB as a source or a sink. Our Kafka connector enables developers to build robust, reactive data pipelines that stream events between applications and services in real time. Over on the Google Cloud Coding Stage, four developers competed to see who could build the closest version of the Google homepage in 20 minutes — without previewing their work. The blind coding test resulted in some fairly primitive approximations of the real thing, but all four contestants were praised for their high pressure creations. The winner of each round took home a limited edition MongoDB track jacket. MongoDB CTO Mark Porter joined in a number of Builder’s Fest activities, delivered several short talks, and often drew a crowd for impromptu Q&A. At one point he gave a “Chaos Presentation” — an improvised talk guided by randomly selected imagery — about the outages that inevitably occur in the public cloud, despite the exceptionally resilient infrastructures and high service levels. “Mirror image is an illusion,” Porter said. “A laptop is not staging, staging is not production, and production is not production.” Different regions have different hardware and configuration patterns that can build up over time, he said. “Staging has had far more rollbacks than production,” he said. “Find weaknesses in your architecture by doing post-mortems after an outage. 
Make staging environments reproducible by blowing them away from time to time. By making staging more predictable, over the course of a few years, you can make production more predictable.” In response to an audience question about what’s more important, implementing a culture of committing to rollbacks or automating it, he said, “The culture of rollbacks is what’s important, but at scale — meaning a couple thousand engineers — culture won’t be enough. You’ll need to automate some of it. But make it so rollbacks are not a bad thing.” A few pods over, developer advocate from Prisma , Sabine Adams, gave a talk entitled, “Giving MongoDB Guardrails.” His talk included step-by-step instructions, using the brand new MongoDB Atlas CLI , on how to ensure data consistency by providing an easy-to-read schema and a type-safe database client. First, he set up a MongoDB cluster in the CLI, then he initialized a TypeScript project with Prisma to model the data, and then used the Prisma CLI to create and retrieve some data. The Prisma client provides an API for reading data in MongoDB, including filters, pagination, ordering, and relational queries for embedded documents. If you want more highlights about MongoDB World 2022, read our Day One and Day Two recaps. For all those who attended the event, we’re happy you made it. For anyone who missed it, we hope to see you at next year's event.
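For readers curious about the Kafka connector demo described above, here is a hedged Python sketch of registering a MongoDB source connector with a Kafka Connect worker over its standard REST API. The worker URL, database, collection, and topic prefix are placeholders, and the connector options shown should be checked against the MongoDB Kafka connector documentation for your version.

```python
import json
import urllib.request

# Placeholder Kafka Connect worker URL and MongoDB settings for illustration.
CONNECT_URL = "http://localhost:8083/connectors"

connector = {
    "name": "mongo-source-demo",
    "config": {
        # Stream change events from MongoDB into Kafka topics.
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb://localhost:27017",
        "database": "shop",
        "collection": "orders",
        "topic.prefix": "demo",  # events land on topics like demo.shop.orders
    },
}

request = urllib.request.Request(
    CONNECT_URL,
    data=json.dumps(connector).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode())
```

Swapping the class for the sink connector would reverse the flow, writing Kafka events into a MongoDB collection, which is the "source or a sink" choice Rob Walters demonstrated.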

June 10, 2022

Highlights From MongoDB World 2022, Day 2

Day Two of MongoDB World 2022 was all about the breakout sessions — more than 80 were on tap for the day. Things kicked off shortly after 8 a.m. with a discussion on empowering women and other underrepresented groups in the workplace, held in the IDEA Lounge . The 9 a.m. slot was packed with 10 sessions that ranged from building a sustainable ecosystem to the principles of data modeling to using Rust to build applications. Steve Westgarth, senior director of engineering at GSK (formerly GlaxoSmithKline) dove into the weighty topic of morality in the digital world and what developers ought to do when the software they build leads to unintended consequences. All too often, there’s immense pressure to release MVPs early — before all potential vulnerabilities have been vetted. Westgarth’s session sprang from a rhetorical question: “Do we as engineers have an ethical and moral responsibility to anticipate unintended consequences and how much personal responsibility should an individual take to ensure ethical management of data?” His discussion answered that with a Yes — developers do have to weigh the risk of unintended consequences, such as data breaches, versus the desire to maximize market opportunity. Westgarth urged developers to ask themselves what the unintended consequences are of the software they have in production, and to raise awareness of these issues in their organizations. A 15-minute lightning talk followed, with a session name that made it a popular draw for fans of worst-case scenarios: “Strange Cases From the Field.” Adam Schwartz, MongoDB director of technical services in EMEA, walked attendees through some especially challenging real-life technical support stories. He gave a detailed account of such curious cases as The Mistaken Hypotheses and The Unsuccessful Mitigations, and shared lessons he learned during years in the trenches as a support specialist. Closing on a positive note, he assured attendees that problem cases are rare, most cases have straightforward solutions, and exceptional cases are always a learning experience. Day One saw Mark Porter announce the MongoDB Relational Migrator , including a live demo of the product. On Day Two, lead product manager Tom Hollander did a deep dive into use cases, justifications, and future capabilities for the tool. MongoDB Relational Migrator imports and analyzes relational database schemas, maps them to an appropriate MongoDB schema, and transforms and migrates the data into MongoDB. Hollander said organizations can experience a 3x to 5x increase in development velocity and up to 70% in cost reductions by migrating away from relational models in favor of a more modern deployment such as MongoDB Atlas . Hollander said he anticipates future capabilities to include continuous replication, Kafka integration, application code generation, schema recommendations, and more. One company thriving in its legacy modernization efforts is Vodafone. The global head of engineering and transformation, Felipe Canedo, described Vodafone’s transition from a traditional telecommunications company to a Telco-as-a-Service (TaaS) provider. At the core of this transition was the creation of a scalable and open platform for the company’s engineers to innovate with complete freedom and flexibility. Canedo said Vodafone chose MongoDB because of its security, cloud-native high availability, support for multi-region and multi-cloud deployments, agile delivery, professional services, and ease of integration. 
The ultimate goal, Canedo said, was to provide Vodafone engineers with the best software experience possible. Day One also saw MongoDB CPO Sahir Azam announce the general availability of MongoDB Atlas serverless instances . On Day Two, MongoDB advisory solutions architect Carlos Castro gave a live demo of deploying a serverless database. In 15 minutes, starting from the Atlas dashboard, Castro took the audience step-by-step through the process of selecting a cloud provider, spinning up the instance, creating an app service, authentication, and users, and then setting up rules to allow users to access data on the instance. Serverless instances always run the latest version of Atlas, include always-on security, and enable customers to only pay for operations they run. Day Two also featured several discussions with leading experts and MongoDB partners. MongoDB senior vice president, product management, Andrew Davidson hosted a panel with three leaders in the effort to close the Developer Experience Gap : Peggy Rayzis, senior director of developer experience for Apollo GraphQL; Lee Robinson, director of developer relations for Vercel; and Søren Bramer Schmidt, chief architect and founder for Prisma. Rayzis cited Apollo’s supergraph as one way it's helping developers be more productive by unlocking their flow state. “When you’re in that flow state, you’re writing better code, making better decisions, and developing better value for consumers,” she said. Schmidt pointed out how the newest generation of developers stand to benefit the most from the proliferation of developer tools. “New generations of developers are much bigger and we can invest in better tooling for them,” Schmidt said. “It’s an exciting time to be building tools for developers.” Lee emphasized the important role the open source community plays in these tools. “People hear about Vercel through Next.js,” Lee said, “and we invest to give back to the open source community.” As gratifying and fun the first two days of World were, we really have something special in store for Day Three. It kicks off with a final keynote address by best-selling author, pioneering inventor, and futurist Ray Kurzweil. Day Three also features our Builder’s Fest , where even MongoDB CTO Mark Porter is expected to lend his considerable expertise to a few promising projects. With live game shows, chaos presentations, nerd battles and more, MongoDB World 2022 will finish on a high note. Check back tomorrow for more highlights from MongoDB World 2022.

June 9, 2022

Highlights from MongoDB World 2022, Day 1

MongoDB World is back in person at New York’s Jacob Javits Center after a three-year hiatus. Day One featured a jam-packed schedule of educational sessions, live tutorials, customer stories, and product announcements for a crowd of nearly 2,700 developers and IT professionals. The developer-focused conference got off to an early start with breakout sessions beginning at 8 a.m. Three sessions were on tap: an introduction to data modeling with MongoDB, a primer on MongoDB Atlas Search , and a tutorial on getting started with MongoDB Atlas . In that tutorial, MongoDB solution architect Tom Gleitsmann explained how, out of all the challenges developers face on a daily basis, the common denominator is friction. Gleitsmann gave a crisp and informative summary of MongoDB Atlas features that were engineered specifically to reduce the amount of friction developers face, including ease of deployment, security by default, data visualization, the Performance Advisor , alerts, and backup scheduling, to name a few. The early-morning sessions were followed by a keynote delivered by MongoDB CEO Dev Ittycheria and Chief Product Officer Sahir Azam celebrating the company’s rapid growth, setting out a vision for its future, and highlighting several of its customers. The executives were joined on stage by Vercel founder and CEO Guillermo Rauch, Wells Fargo head of digital enablement Catherine Li, Avalara VP of software engineering John Jemseck, and several MongoDB product experts, each providing insight into the latest enhancements to MongoDB. The biggest reveal, though, was a new vision for MongoDB Atlas and the products that work seamlessly with it, such as Atlas Search and Atlas Data Federation . “We believe that developers want to build on a modern data model that's designed to the way they think and the way they code,” Ittycheria said. “And we also believe that developers want an elegant developer experience that makes their lives so much easier. And they want all this in one unified platform. What they need is a developer data platform.” After the morning keynote, sessions ran back-to-back until lunch. They ranged from quick, 15-minute “chalk talks” to hour-plus deep dives. In one, MongoDB software engineer James Wang gave a hands-on tutorial on using our data visualization tool, MongoDB Atlas Charts , which is fully integrated with MongoDB Atlas. Wang showed how easy it is to link data sources in just a few clicks. Using a fictitious company, he demonstrated step-by-step how to embed data visualization via code snippets and an SDK, share the data with others using a public link, filter data inside the admin web page, and restrict access to authorized users. Attendees followed along on their own laptops and were quickly able to replicate the visualizations. In another talk, Keller Williams’ senior architect Jim McClarty shared some of the real-world impact of Atlas — how it has accelerated the real estate firm’s ability to innovate its applications, how essential Atlas Search is in their applications, and how Charts has become “the best hidden feature in Atlas.” Attendees shuttled from room to room like they had places to go and people to meet, which they did. MongoDB principal, industry solutions, Felix Reichenback took attendees through mobile sync and why developers often waste tons of time trying to build their own sync tool that fails to handle conflict resolution because of the intermittent nature of mobile connections. 
Next, Michael van der Haven, VP at consulting giant CGI and expert in cloud-native platforms, explained how he helped the energy industry’s open source architecture group, OSDU, migrate away from Elasticsearch, simplify its architecture by removing memory-intensive indexes, and reduce OPEX by six figures using MongoDB Atlas. After lunch, MongoDB CTO Mark Porter gave an energetic keynote, announcing several more new products and features, including the new MongoDB Atlas CLI , the general availability of the Data API , and, perhaps our biggest announcement of the day, Queryable Encryption , which allows users to search their databases while sensitive data stays encrypted. Available in preview, Queryable Encryption offers a big step forward in protecting sensitive data. Porter gave personal anecdotes illustrating many of the hurdles developers have to overcome that have nothing to do with building software, such as rigid and fragile relational databases, and working with SQL, a language that developers early in their careers or fresh out of school have no desire to work with. Porter’s keynote address included a live demo of the Relational Migrator, which, while risky to perform in front of an audience, went off flawlessly. Meanwhile, a series of events kept the IDEA Lounge a lively place, including a great panel discussion called Our Journey: Being Black in Tech. And a floor below the workshops, more than a dozen MongoDB partners demonstrated their platforms and related products — including many of the companies named MongoDB Partners of the Year . The schedule for Day 2 is equally packed, with more than 80 sessions that include partner showcases, strange cases from the field, book club sessions, more deep dives into product announcements and tutorials, and talks on diversity, equity, and inclusion. In the afternoon, MongoDB celebrates Pride with food, drinks, and entertainment at the historic Stonewall Inn. And MongoDB World 2022’s biggest event happens at the end of the day — “The Party,” featuring music from The Midnight and Don Diablo, as well as retro arcade games and an open bar. Check back tomorrow for more highlights from MongoDB World 2022.

June 8, 2022

Closing the Developer Experience Gap: MongoDB World Announcements

Now is a great time to be a software developer or architect. Never have there been so many solutions, vendors, and architectural patterns to choose from as you build new applications and features. But the sheer number of choices creates another puzzle for developers to solve before they can begin to build. Many of MongoDB's efforts over the past year have been to help address the needs of the developer communities we serve, and one of the greatest needs we've seen in developer communities is improving the experience of being a developer. At MongoDB World 2022, we announced several tools to help improve that experience and to boost developer velocity:

Atlas Data API — A serverless API that lets you easily access your Atlas data from any environment that supports HTTPS requests, including services like AWS Lambda and Google App Services. The Atlas Data API is fully functional upon generation, language-agnostic, and secure from the start.

Serverless instances — With MongoDB serverless instances, developers don't have to worry about scaling up to meet increasing workloads or paying for resources they're not using if their workload is idle. The serverless model dynamically uses only what it needs — and only charges for what it uses.

Atlas CLI — The MongoDB Atlas CLI is a completely new way to access Atlas in a non-GUI-centered environment. CLIs are often the interaction method of choice for developers, especially advanced developers who prefer control and speed over a more visual interface. Our new CLI gives these developers an easier registration experience with nearly instant free tier deployments in Atlas.

Time series — We have expanded our data platform so developers can work more easily with time series data in support of IoT use cases, financial analytics, logistics, and more. MongoDB time series makes it faster and lower cost to build and run time series applications by natively supporting the entire time series data lifecycle.

Facets in Atlas Search — Categorize data with facets for fast, filtered search results. With facets in Atlas Search, you can index your data to map fields to categories, then quickly update query results based on the ones relevant to your users.

Verified Solutions — The MongoDB Verified Solutions program gives developers the confidence to use third-party tools, such as Mongoose, by guaranteeing comprehensive testing of the tools as well as a base level of support from MongoDB Technical Services.

Change streams — Change streams enable developers to build real-time, event-driven applications that react to data changes as they happen. This allows them to build more complex features and better end-user experiences. (See the sketch just after this list.)
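Here is the sketch referenced in the change streams item: a hedged Python/PyMongo example that watches a collection and reacts to inserts as they happen. The connection string, namespace, and pipeline are placeholders rather than a prescribed pattern, and change streams require a replica set or an Atlas cluster.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://cluster0.example.mongodb.net")  # placeholder URI

# Only surface insert events; other operation types are filtered out.
pipeline = [{"$match": {"operationType": "insert"}}]

with client.shop.orders.watch(pipeline, full_document="updateLookup") as stream:
    for change in stream:
        # Each change event describes what happened and to which document,
        # so the application can react in real time (notifications, caches, etc.).
        doc = change["fullDocument"]
        print(f"new order {doc.get('order_id')} for {doc.get('customer', {}).get('name')}")
```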
The paradox of choice for developers

Developers today have no shortage of tools to work with, but the abundance of options is itself a problem. And when there's little or no central decision-making, developers are forced to figure out how to stitch together a patchwork of technology solutions to create the seamless user experiences that consumers have come to expect. Developers had fewer choices when applications were built on a three-tier framework composed of a relational database, a J2EE stack, and an app or web server. Since then, however, application development has fragmented into different architectures, SDKs, and cloud services, leaving developers many more patterns to figure out. On top of that, the rise of DevOps has increased the pressure on developers to build and maintain the tools they're working with, and serious development shops often take pride in building their own toolchains, backends, and databases.

Put it all together — the abundance of choices, the patchwork nature of solutions, the pressure to build and maintain toolchains, and the glue code keeping it all together — and it adds up to more cognitive load, elevated stress levels, and a lengthening of time to value. As Stephen O'Grady from analyst firm RedMonk explains, "Developers are forced to borrow time from writing code and redirect it toward managing the issues associated with highly complex, multifactor developer toolchains held together in places by duct tape and baling wire. This, then, is the developer experience gap."

Having a lot of options is a good thing — until it's not. One way we're working to unwind the paradox of choice is by providing tools that exist in the same form whether in the cloud or on the client — that is, solutions that integrate with the way developers already work. This could mean plugging into a CLI first, abstracting provisioning, simplifying and securing the data layer so developers don't have to worry about it, and unlocking the creativity of developers with a data model that maps to how data is actually going to be used. We're also enabling developers to access the tools they need from within MongoDB without having to integrate myriad bolt-on tools (i.e., the paradox of choice).

Building at velocity

The key to unlocking developer productivity, as we see it, is giving developers the building blocks they need to create a whole workload from scratch, or to bring a new workload into their ecosystem — be it time-series, search, or analytics — and have them run on a single platform instead of having to stitch together disparate systems. Our goal is to bring a modern data layer to modern applications. We want to bring that experience to more and more of what you work on. We know that modern applications have complicated data requirements, but that shouldn't mean complicated data infrastructure. We want to serve most of your workloads with a single unified platform.

Learn more about MongoDB World 2022 announcements at mongodb.com/new and in these stories:

5 New Analytics Features to Accelerate Insights and Automate Decision-Making
4 New MongoDB Features to Improve Security and Operations
Streamline, Simplify, Accelerate: New MongoDB Features Reduce Complexity
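To make the Atlas Data API item in the list above more concrete, here is a hedged Python sketch of calling the Data API's find action over plain HTTPS. The app ID, API key, data source name, and even the base URL shape are placeholders and assumptions; confirm the endpoint details against the Atlas Data API documentation for your deployment before relying on them.

```python
import json
import urllib.request

# Placeholders: substitute your Data API app ID, API key, and cluster name.
APP_ID = "data-abcde"
API_KEY = "YOUR_DATA_API_KEY"
URL = f"https://data.mongodb-api.com/app/{APP_ID}/endpoint/data/v1/action/find"

payload = {
    "dataSource": "Cluster0",
    "database": "shop",
    "collection": "orders",
    "filter": {"customer.name": "Maria Gomez"},
    "limit": 5,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json", "api-key": API_KEY},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()).get("documents"))
```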

June 7, 2022

Consider Your Cloud Backup Strategy on World Backup Day, March 31

World Backup Day, marked every March 31, reminds all of us of the risk and vulnerability of the data stored on our devices and systems. Data loss happens every day for a variety of reasons. While human error is the most common cause, ransomware is fast becoming the most expensive one. A big reason is that ransomware criminals have moved on from targeting individuals to attacking businesses, where data is far more sensitive and valuable. Businesses have more to lose if their data is compromised or exposed, and they have deeper pockets to pay larger ransoms. Having a backup copy of data can help a business recover clean copies of data after it's been corrupted by malware, accidentally deleted, or destroyed by a fire or flood. Backup could also save the day during cloud outages. For businesses, any type of data loss is extremely costly. So having a backup plan as part of an overall disaster recovery strategy is critical for the survival of every business.

Backup benefits

A backup and disaster recovery strategy is necessary to protect your mission-critical data against these types of risks. With such a strategy in place, you'll gain peace of mind knowing that if your data ever becomes accidentally deleted or infected by malware, you'll be able to recover it and avoid the cost and consequences of data loss. You'll also satisfy important regulatory and compliance requirements by demonstrating that you've taken a proactive approach towards data safety and business continuity. Taking regular backups offers other advantages as well. The backups can be used to create new environments for development, staging, or QA without impacting production. This practice enables development teams to quickly and easily test new features, accelerating application development and ensuring smooth product launches.

Backup and the cloud factor

Back when business systems were mostly on-premises, there was a simple framework for disaster recovery planning: the 3-2-1 backup rule. It essentially recommends keeping three copies of your data, in at least two different form factors, with one of them kept at an offsite or remote location. In practice, this might mean creating a bare metal image of an entire server and storing it on a secondary server in the same server room. In addition, you would also make a copy of all of your server data on magnetic tape and transport it to an offsite, secure location. So that's three copies of data (the original, the bare metal image, and the tape backup) in two different form factors (disk and tape), with one stored offsite.

The 3-2-1 backup rule has worked well for a lot of businesses for decades. But systems have changed. First, barely anyone uses tape anymore. The time it takes to transport a tape backup to where it can be used for disaster recovery is more than most businesses can tolerate. The criticality of systems has shrunk recovery time objectives (RTO) to just hours or minutes. Tape backups are simply not practical for most modern IT environments. But the cloud is. Hyperscale cloud providers can provide service levels that are just as reliable as, if not more reliable than, traditional on-premises environments. But customers shouldn't make the mistake of thinking that by deploying in the cloud, they're off the hook when it comes to planning and implementing a disaster recovery strategy. Businesses can and do lose data in the cloud. And hyperscale cloud providers do experience outages. Businesses must have a backup strategy, but it needs to be updated to factor in the cloud workloads that have become ubiquitous.

Cross-cloud data protection

An outage at any of the major cloud providers could take your databases offline. Keeping extra backup copies with different cloud providers is a good way to ensure you can still access your data in the event of an outage with a single cloud provider. Recent outages at hyperscale cloud providers have underscored the need for cross-cloud backup. "Most cloud outages are related to software bugs rather than physical catastrophes," says Chris Shum, Atlas product lead at MongoDB. "We've always protected ourselves against physical catastrophes by distributing across data centers or regions, but no one really protects themselves against the software bug." Shum says by backing up workloads on different clouds, you could tolerate a cloud provider going down and your database and your application would still be up.

Getting around the cloud backup skills gap

Data gravity keeps many businesses locked into a single cloud provider. Becoming fluent with a particular cloud provider is a skill to acquire just like anything else. And once you become comfortable with one provider, its hardware configurations, operational tools, and its offerings, it can be hard to venture out to a different provider with different hardware and pricing. But developing flexibility in the cloud is critical if you hope to leverage the best features, functionality, and cost efficiencies from each cloud provider. MongoDB has made it possible to do just that. "With Atlas, we've made it our mission to abstract as much of the management away as possible," Shum says. "It's all available as a fully managed service. So things like hardware asymmetry between different cloud providers, offerings being different, prices being different, how you'd set up networking infrastructure, all of those things that to you as a consumer might be different, we've abstracted it away for you. And because of the abstraction, you are then free to move nodes to whichever cloud provider or region you want."

Data protection with Atlas

MongoDB Atlas provides point-in-time recovery of replica sets and cluster-wide snapshots of sharded clusters. It's simple to restore to precisely the moment you need, quickly and safely. Backups can be restored automatically to the existing MongoDB Atlas cluster or downloaded to be manually archived or restored on a different infrastructure. MongoDB Atlas provides:

Security features to protect access to your data
Built-in replication for always-on availability, tolerating complete data center failure
Backups and point-in-time recovery to protect against data corruption
Fine-grained monitoring to let you know when to scale — additional instances can be provisioned with the push of a button
Automated patching and one-click upgrades for new major versions of the database, enabling you to take advantage of the latest and greatest MongoDB features
A choice of cloud providers, regions, and billing options

Backup wrap-up

If you're still depending on a single cloud provider to keep your critical workloads online and available, consider World Backup Day as your reminder to identify and close the gaps in your cloud disaster recovery strategy. For information on how to back up and restore cluster data in MongoDB, read this article in our documentation.
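As a concrete companion to the Atlas backup features above, here is a hedged Python sketch that lists a cluster's cloud backup snapshots through the Atlas Admin API using API-key digest authentication. The project ID, cluster name, and key values are placeholders, the script assumes the third-party requests package, and the endpoint path should be verified against the Atlas Admin API version you actually use.

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders: an Atlas programmatic API key pair, project (group) ID, and cluster name.
PUBLIC_KEY, PRIVATE_KEY = "pubkey", "privkey"
PROJECT_ID, CLUSTER = "5f1d7f3a9d1e8a1b2c3d4e5f", "Cluster0"

url = (
    "https://cloud.mongodb.com/api/atlas/v1.0"
    f"/groups/{PROJECT_ID}/clusters/{CLUSTER}/backup/snapshots"
)

response = requests.get(url, auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY), timeout=30)
response.raise_for_status()

# Each result describes one snapshot that could seed a restore, a staging
# environment, or an archive copy kept with a different provider.
for snapshot in response.json().get("results", []):
    print(snapshot.get("id"), snapshot.get("createdAt"), snapshot.get("status"))
```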

March 31, 2022

DocumentDB, MongoDB and the Real-World Effects of Compatibility

If there’s confusion in the market for document databases, it probably has to do with how the products are marketed. AWS claims that DocumentDB, its document model database, comes “with MongoDB compatibility.” But the question of how compatible DocumentDB actually is with MongoDB is worth considering. DocumentDB merely emulates the MongoDB API while running on top of AWS’s cloud-based relational database, Amazon Aurora. And it’s an inconsistent imitator at best, because it fails 62% of MongoDB API correctness tests . Even though AWS claims compatibility with MongoDB 4.0, our tests have concluded that its emulator is a mishmash of features going back to MongoDB 3.2, which we released in 2015. The result is that DocumentDB lacks many of the features that come standard in MongoDB. We’ve already published a side-by-side comparison of the feature sets for each solution. Instead of covering the same ground here, we'll explain how some of those differences play out in real-world scenarios. DocumentDB vs. MongoDB head-to-head comparison Scaling writes, partitioning data, and sharding Native sharding enables you to scale out databases horizontally, across multiple nodes and regions. Atlas offers elastic vertical and horizontal scaling to smooth consumption. DocumentDB does not scale writes or partition data beyond a single node. In order to ensure consistency, MongoDB uses concurrency control measures to prevent multiple clients from modifying the same piece of data simultaneously. Replicate and scale beyond a single region A number of factors are driving the need to distribute workloads to different geographic regions. In some cases, it’s to reduce latency by putting data closer to where it’s being used. In other cases, it’s to store data in a specific geographic zone to help meet data localization requirements. Finally, there’s the need to ensure the availability of data when there’s an outage of an entire AWS region. The flexibility to replicate and move workloads as needed is increasingly seen as a business requirement. But by default DocumentDB limits you to just 15 replicas and constrains you to a single region. Newly introduced Global Clusters may look like an answer, but much like “MongoDB compatibility,” it’s potentially misleading. The Global Clusters feature more closely resembles multi-region replication since it only allows writes to single primaries instead of being able to write to multiple regions. It also requires manual reconfiguration to recover from failures, making it a partial solution, at best. MongoDB Atlas allows true global cluster configurations so you can deliver capabilities to all your users around the world. At a click of a button, you can place the most relevant data near local application servers across more than 80 global regions to ensure low-latency reads and writes. By being able to define a geographic location for each document, your teams are able to more easily meet local privacy and compliance measures. It’s also an insurance policy against being locked into a single public cloud provider. High resilience, rapid failover, retryable writes For critical applications, every second of downtime represents a loss of revenue, trust, and reputation. Rapid failover to a different geographic area is necessary when recovery time objectives (RTO) are measured in seconds. DocumentDB failover SLAs can be as high as two minutes, and multi-region failover is not available. 
With MongoDB, failover time is typically five seconds, and failover to a different region or cloud provider is also an option.

Write errors can be as costly as downtime. If a write that increments a field is duplicated because a dropped connection failed to notify the client that the write was executed, that extra increment can be very costly, depending on what it represents. With retryable writes, a write can be sent multiple times but applied exactly once. MongoDB has retryable writes. DocumentDB doesn’t.

Integrated text search, geospatial processing, graph traversals

Integrated text search saves time and improves performance because you can run queries across multiple sources. With DocumentDB, data must be replicated to adjacent AWS services, which increases cost and complexity. MongoDB Atlas combines integrated text search, graph traversals, and geospatial processing into a single API and platform. Integrated search with MongoDB Atlas helps drive end-user behavior by serving up relevant results based on what users are looking for or what businesses want to direct them toward.

Hedged reads

Geographically distributed replica sets can also be used to scale read operations and intelligently route queries to the replica that’s closest to the user. Hedged reads automatically route queries to the two closest nodes (measured by ping distance) and return results from the fastest replica, which minimizes situations where queries are waiting on a node that’s already busy. DocumentDB doesn’t offer hedged reads, and it’s more restricted in the number of replicas it allows and in where workloads can be placed. MongoDB gives you more flexibility when distributing data geographically for hedged reads since it is available on all of the major public cloud providers.

Online Archive

Putting data in cold storage can be a death knell if accessing it again is too cumbersome or slow. With online archiving, you can tier data across fully managed databases and cloud object storage and query it through a single endpoint. Online archiving automatically archives historical data, reducing operational and transactional data storage costs without compromising query performance. MongoDB has it. DocumentDB doesn’t.

Integrated querying in the cloud

Running separate queries against separate data stores can drain resources and slow queries. The better approach is to query and analyze data across all your databases and storage containers at once. You can do this with integrated querying, where a single query analyzes live cloud data and historical data together and in place for faster insights. With DocumentDB, you have to replicate data to adjacent AWS services. With MongoDB, you can query and analyze data across cloud data stores and MongoDB Atlas in its native format. You can also run powerful, easy-to-understand aggregations through a unified API for a consistent experience across data types.

On-demand materialized views

When you materialize aggregation results into a collection, the entire output collection is typically regenerated every time the aggregation runs, which consumes CPU and I/O. With the $merge stage, you can update the generated results collection rather than rebuild it completely: $merge incrementally updates the collection each time you run the pipeline, so refreshing the results is as simple as running the aggregation again and letting it update the values in place.
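As a rough illustration, here is a minimal Python (pymongo) sketch of that pattern, assuming a hypothetical sales.orders collection with orderDate (a date) and amount (a number) fields: the pipeline totals orders by month and merges the results into a monthly_totals collection.

```python
# Minimal sketch of an on-demand materialized view with $merge, using
# pymongo against a hypothetical "sales.orders" collection.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/")  # placeholder URI
orders = client["sales"]["orders"]

pipeline = [
    # Total order amounts per month.
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y-%m", "date": "$orderDate"}},
        "totalAmount": {"$sum": "$amount"},
        "orderCount": {"$sum": 1},
    }},
    # Upsert the results into the materialized view; matching documents
    # are replaced in place rather than the whole collection being rebuilt.
    {"$merge": {
        "into": "monthly_totals",
        "on": "_id",
        "whenMatched": "replace",
        "whenNotMatched": "insert",
    }},
]

orders.aggregate(pipeline)  # rerun this whenever the view should be refreshed
```

The $merge stage shown here is available in MongoDB 4.2 and later.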
$merge gives you the ability to create collections based on an aggregation and update those collections efficiently. This lets you build on-demand materialized views, where the content of the output collection is incrementally updated each time the pipeline runs. MongoDB has this capability. DocumentDB does not.

Rich data types

The decimal data type is critical for storing very large or very small numbers, such as financial and tax computations, where decimal rounding must be reproduced exactly. DocumentDB does not support decimal data types and therefore cannot process complex numeric data losslessly, which is a problem for financial and scientific applications. MongoDB supports rich data types like Decimal128, giving you 128 bits of high-precision decimal representation (a brief example follows at the end of this article).

Client-side field-level encryption

Client-side field-level encryption (FLE) reduces the risk of unauthorized access to or disclosure of sensitive data, such as personally identifiable information (PII) and protected health information (PHI). Fields are encrypted before they leave the application, which protects data in transit over the network, in database memory, at rest in storage, in backup repositories, and in system logs. DocumentDB does not offer client-side FLE. MongoDB’s client-side FLE provides among the strongest levels of data privacy and security for regulated workloads.

Platform agility

In addition to the feature sets described here, one of the biggest differences between DocumentDB and MongoDB is the degree of freedom you have to move between platforms. AWS offers seamless movement and minimal friction between services within its own ecosystem. MongoDB makes it easy to replicate data or move workloads to any cloud provider, giving you complete flexibility within the AWS platform as well as outside of it — whether it’s a self-managed MongoDB instance on cloud infrastructure, a full on-premises deployment, or just a local development instance on an engineer’s laptop.

Try MongoDB Atlas for free today!
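To ground the decimal data type comparison above, here is a brief pymongo sketch that stores and reads back an exact decimal value with Decimal128. The finance.accounts namespace, field names, and connection string are placeholders for illustration only.

```python
# Brief sketch: storing an exact decimal value with MongoDB's Decimal128
# type via pymongo. The "finance.accounts" namespace is illustrative.
from decimal import Decimal
from bson.decimal128 import Decimal128
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/")  # placeholder URI
accounts = client["finance"]["accounts"]

# Store a monetary amount without binary floating-point rounding error.
accounts.insert_one({"accountId": "A-1001", "balance": Decimal128("10745.23")})

doc = accounts.find_one({"accountId": "A-1001"})
balance = doc["balance"].to_decimal()   # back to Python's Decimal
print(balance + Decimal("0.07"))        # exact decimal arithmetic: 10745.30
```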

July 16, 2021

2021 Payment Trends Guide: What Corporate Clients Want From Their Bank

Payments data monetization is becoming the new battleground for financial services organizations and their corporate clients. The processing of payments data has evolved from a back-end operation into a critical space for innovation. Banks are determined to get more value out of payments data, while corporate clients are willing to work with any partner that can help them do it. As one senior executive at a global bank commented: “(Data monetization) would have been very costly to do on a mainframe. The newer technologies like cloud allow you to do this in a much more efficient and effective way.”

A recent survey of bank executives conducted by Celent, sponsored by MongoDB and Icon Solutions, revealed that payments data monetization is a high priority for key technology decision makers. Download the complete report here.

What is payments data monetization?

Payment data includes transaction records as well as the data contained within payment messages. Monetizing that data is relevant for a number of use cases, including:

- Using payments data to improve internal operations, identify clerical errors, and optimize procurement processes
- Improving straight-through processing rates, which measure the percentage of transactions that pass through the system without manual intervention
- Incentivizing customers to make payments at different times to optimize the bank’s liquidity position
- Using payments data to enhance existing corporate-facing services
- Tracking payments and forecasting cash balances more accurately

What's new in the payments industry?

The time to invest in payments data monetization is now, according to a survey of hundreds of senior bank executives, corporate treasurers, and CFOs. Banks are eager to monetize their payments data, particularly as real-time payments accelerate and the push by regulators around the globe to adopt ISO 20022 intensifies. FedNow in the U.S. is the first true attempt to modernize the American payment landscape. At the same time, high demand for data-led services is prompting corporate clients to look beyond their existing banking partners for new and alternative payment services. More than half of corporate clients surveyed relied on a partner other than their lead bank for cash forecasting (62%) and cash visibility (56%).

The hope among banking professionals is that corporate clients will be willing to pay for service improvements and significant value-add services. But where banks see value-add features, clients see obligatory services their banking partners should offer by default. There are, however, opportunities for banks to justify additional fees, including:

- Using payment data to support new propositions and business models
- Driving value-added services through enriched third-party data sets
- Partnering with other organizations to launch new offerings

Which services will corporate clients pay for?

Corporate clients are seeking a wide range of payment services and are clear about which ones they’re willing to pay for, including real-time cash balances and forecasting, better security and fraud protection, and data consolidated into a single dashboard. Services like virtual accounts, automated tracking, reconciliation of receivables, and better integration with corporate workflows are also considered high value, but corporate clients see them as features that shouldn’t carry added fees.
Their most sought-after services include:

- Consolidated real-time data from multiple banks in a single dashboard (38%)
- Real-time cash forecasting (37%)
- Better security and fraud protection (36%)
- Real-time cash balances (35%)

Find out more about which services corporate clients are willing to pay for

The push for ISO 20022 standardization

The payments industry and corporate clients agree on the need to adopt ISO 20022 message formats in their native JSON representation, which will allow richer data to be sent across the network and increase rates of automation without further adjustments to legacy data models in relational databases (a loose illustration follows at the end of this article). According to the survey, banks plan to make significant investments in supporting ISO 20022 so they can offer improved services to corporate clients; 74% of banks see ISO 20022 migration as an opportunity to invest in new data-led services. For their part, 32% of corporate clients want help managing ISO 20022 changes from their banking partners, and 31% said they would switch providers if it meant receiving help with ISO 20022 compliance.

Cloud and agility

Perhaps no sector of the market is more attached to monolithic legacy applications than the banking sector. But if banks wish to differentiate themselves through innovation, they’ll need to leverage modern data architectures to address customer needs with greater agility. Cloud technologies will be essential in the push to adopt modern database design because they offer the ability to create more flexible and responsive applications and microservices with on-demand scalability. According to the survey, banks have the opportunity to unlock long-term gains by combining data assets and integrating payments data into an enterprise-wide data strategy. As the survey points out, “Data monetization is a strategy, not a product.”

Consolidated payment data and single-view dashboard

The most sought-after payments service among finance executives — cited by 38% of corporate clients, 53% of which boast revenue of $500 million to $1 billion — is a single dashboard that provides a consolidated view of real-time data across all corporate bank partners. Although technology solutions for this service exist, implementation lags. Banks should embrace this critical need as an opportunity to differentiate themselves from competitors.

Find out how MongoDB brings data agility to payment workflows
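As a loose illustration of what richer, ISO 20022-style payment data can look like in a document database, here is a short Python sketch that stores a pared-down, hypothetical credit transfer message as a document and runs the kind of aggregation a consolidated cash dashboard might use. The field names are simplified placeholders, not a full ISO 20022 schema, and the connection string and namespace are assumptions for the example.

```python
# Loose sketch: storing a pared-down, ISO 20022-style credit transfer
# message as a document and aggregating it for a dashboard view.
# Field names are simplified placeholders, not the full standard schema.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster-host>/")  # placeholder URI
payments = client["payments"]["credit_transfers"]

payments.insert_one({
    "msgId": "MSG-2021-000123",
    "createdAt": datetime(2021, 6, 1, 9, 30, tzinfo=timezone.utc),
    "debtor": {"name": "Acme Corp", "account": "GB00XXXX00000000"},
    "creditor": {"name": "Globex Ltd", "account": "DE00XXXX00000000"},
    "amount": {"currency": "EUR", "value": "15000.00"},
    "remittanceInfo": "Invoice 4711",
})

# Count transfers per debtor, the sort of rollup a consolidated
# real-time dashboard might display.
pipeline = [
    {"$group": {"_id": "$debtor.name", "transfers": {"$sum": 1}}},
    {"$sort": {"transfers": -1}},
]
for row in payments.aggregate(pipeline):
    print(row["_id"], row["transfers"])
```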

June 15, 2021