MongoDB Applied

Customer stories, use cases, and experiences

MongoDB and Clarity Business Solutions: Enabling Modernization for Public Sector Clients

Cloud-based transformation is now a must-have for federal agencies. And, in partnership with Clarity Business Solutions, MongoDB is making that transformation easier for government agencies, particularly those that work in closed, air-gapped environments and that require security clearances for their support staff. In this article, we’ll look at specific ways MongoDB and Clarity Business Solutions are working together to support public sector clients.

Cloud challenges

IT teams within government agencies want many of the same cloud benefits that their colleagues in the private sector enjoy: better performance, the ability to outsource the management of their infrastructure, and a path to building more resilient applications. And government leaders, at the national level, are pushing for more cloud adoption. A May 2021 executive order from President Joe Biden called on all federal agencies to “accelerate movement to secure cloud services.” In the mission to support U.S. troops, Danielle Metz, the U.S. Department of Defense’s Deputy CIO for Information Enterprise, said, “It all comes down to harnessing the power of cloud compute and then being able to natively build applications continuously and often in that space.”

For government agencies, security is an overriding concern. However, some of the precautions that enable the highest levels of security also make it more difficult to keep applications up to date, to modernize them, and to move them to the cloud. Government agencies often work in closed networks, without access to the internet. The need for security clearances makes it difficult for agencies to take advantage of support from software companies, consultants, and other members of the technology ecosystem. Even personnel with appropriate security clearances aren’t always allowed to go on-site to assist their government clients.

These extra layers of security can also make it difficult for MongoDB to support public sector clients that require top secret clearances. Now, we’re pleased to announce that our ability to serve these clients has been enhanced through our partnership with Clarity Business Solutions, a software and systems engineering company that focuses on data analytics, processing, and data flow. Clarity specializes in working with the federal government and understands the constraints under which government agencies operate — as well as the requisition procedures and security protocols unique to the federal government. The company has experience working in closed, air-gapped environments, and all but two of Clarity’s employees hold security clearances.

Joint solutions

In the first phase of our partnership, MongoDB and Clarity are jointly offering three solutions to better support our public sector clients:

Application modernization
Trusted Tier Support
Rapid start

Let’s look at each of these solutions in turn.

Application modernization

Together, Clarity and MongoDB offer public sector clients a proven, iterative approach to application modernization. Clarity’s security clearances allow its staff to sit side by side with public sector clients when necessary, and Clarity has deep experience with the environments common to government agencies. Clarity and MongoDB leverage a strategic process for analyzing legacy applications and modernizing and migrating them iteratively, rather than trying to update an entire system in a “big bang” approach. This iterative approach allows clients to modernize without downtime.
Legacy and modernized systems can run in parallel for a period of time, enabling troubleshooting and increasing confidence. Clarity and MongoDB combine the power of the MongoDB application data platform with Clarity’s extensive client domain knowledge. This partnership allows teams to focus on application feature development and quickly get the data platform operational. Public sector clients often operate systems with significant accumulated technical debt. Clarity and MongoDB are partnering to help clients increase efficiency, improve performance and scalability, and optimize maintenance as each monolithic application is modernized, rather than waiting years for an entire system to be replaced.

Trusted Tier Support

MongoDB and Clarity Business Solutions are offering a concierge support service specifically for the public sector. Trusted Tier Support engages U.S.-only technical staff, with appropriate clearances, to provide phone, online, or even on-site support for MongoDB government customers. Trusted Tier Support provides continuity between call-in support and support offered by individuals with on-site clearance. Clarity Trusted Tier Support engineers are tightly integrated with the MongoDB support team and can rely on the expertise of the broader MongoDB engineering organization while ensuring that all necessary details remain confidential. Service-level agreement response times are twice as fast as MongoDB’s published response times for commercial support.

Rapid start

This new service helps public sector clients get operational with MongoDB as quickly and efficiently as possible. This intense, short engagement ensures the following:

Networking layouts are optimized and secured using appropriate firewall rules and TLS to encrypt all data in transit (a minimal connection sketch appears at the end of this article).
Data storage is set up to meet applications’ needs.
Backup and recovery are properly enabled.
Agencies have the proper guidance to achieve environment security requirements. For example, data at rest is encrypted according to client requirements through disk encryption and/or MongoDB’s encrypted storage engine. Data in use can be protected with client-side field-level encryption.

Additionally, Clarity engineers can consult and provide input on schema design, leveraging key MongoDB features and working with training staff on best practices for using MongoDB.

We believe these three new offerings will significantly ease the way for our government clients, enabling them to make the best use of MongoDB and cloud technologies and to better serve their end customer — all of us. Learn more about Clarity Business Solutions.
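To make the TLS item in the Rapid start checklist concrete, here is a minimal connection sketch using the MongoDB Node.js driver. The host name, credentials, and CA file path are placeholders; an actual engagement would use the agency’s own network layout and certificate authority.

```typescript
import { MongoClient } from "mongodb";

// Placeholder connection string: tls=true encrypts all data in transit,
// and tlsCAFile points at the agency's own certificate authority bundle.
const uri =
  "mongodb://app-user:app-password@db01.agency.internal:27017/?tls=true" +
  "&tlsCAFile=/etc/ssl/agency-ca.pem&authSource=admin";

async function main(): Promise<void> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    // Round-trip a ping to confirm the encrypted connection works.
    await client.db("admin").command({ ping: 1 });
    console.log("Connected over TLS");
  } finally {
    await client.close();
  }
}

main().catch(console.error);
```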

June 23, 2022
Applied

How Telcos Are Transforming to Digital Services Providers

The telecommunications industry is in the midst of a digital revolution, shifting from a traditional service delivery model to one that is increasingly customer-centric and that extends beyond traditional connectivity to include diverse digital services. Telcos undergoing this modernization journey put digital services first, offering apps, streaming services, retail platforms, peer-to-peer payment platforms, and more.

As telcos delve into the complex 5G, IoT, and AI technologies powering personalized and real-time user experiences, pressure is increasing on aging networks and business support system (BSS) infrastructures. MongoDB customers like TIM and Telefónica are using the MongoDB Atlas developer data platform to deliver a robust platform-focused experience that complements existing technologies. Through an integrated modernization approach, telcos are improving both customer and developer experiences, building innovative new applications.

In a recent roundtable discussion, Boris Bialek, MongoDB global head of industry and solutions, sat down with telco IT leaders Paolo Bazzica, head of digital solutions at Italy’s TIM, and Carlos Carazo, global CTO of Spain’s Telefónica Tech IoT and Big Data division. This article provides an overview of the discussion and insights into how platform thinking is invigorating telco IT teams.

From communications services providers to digital services providers

The shifting value chain in telecommunications. Source: Kearney

The shift and expansion from traditional communications services to a comprehensive digital services suite requires global telecommunications companies to rethink their monetization strategies. Even before the pandemic, an evolution was well underway for telecommunications providers. From 2010 to 2020, overall revenue from connectivity services grew by only 2%, according to research compiled by Kearney. During the same period, digital services experienced a five-fold increase. Although telecommunications providers successfully sparked a revolution that grew into a $6.3 trillion digital economy, only those capitalizing on digital services reaped the benefits. In 2020, digital services like e-commerce and online advertising surged, capturing nearly 80% of growth.

Leveraging platform thinking

As network operators evolve into digital service providers, the idea of platform thinking is rippling across the industry. Network connectivity was tested by the hardships of the March 2020 COVID-19 lockdown in Italy, but TIM’s digital platform project Fly Together, initiated in 2018, helped bridge the divide. “People went from their normal lives to a full lockdown in one day. People realized that telco was a key point, because you need to stay at home, but you still need to communicate to work and go to school,” said Bazzica in the virtual roundtable discussion hosted by MongoDB. “Our digital platform was the way to refill or top up your account, and access ebooks and so on, so I think it’s more than just an evolution for the business; it's a different positioning.”

Today, customer trust is a key differentiator and essential focus for TIM. People rely on TIM’s services to keep the country going. And TIM continues to modernize the digital experiences of its customers through the Fly Together platform. “From my perspective, this is definitely a trend, and I think it’s the evolutionary stalwart of the digital life of the people to be relevant and continue to be their trusted partner,” Bazzica said.
A similar dynamic led to the creation of Telefónica Tech, a division of Spain’s Telefónica SA, two years ago, according to Carazo. The new business is split into two units: one dedicated to cloud and cybersecurity solutions and the other to IoT and big data digital services, which are the services customers need to pursue their own digital transformations. “We are strongly convinced that connectivity is the basis for any new digital economy, so we are really proud to offer connectivity for these customers,” Carazo said.

At the center of Telefónica Tech’s transformation is its Kite Platform, run on MongoDB, a managed connectivity platform serving close to 30 million IoT devices all over the world. The platform provides connectivity, but it goes beyond IoT connectivity to deliver multidimensional benefits across all IoT environments, from the devices to the products connecting the clouds. This is the foundational component of Telefónica Tech’s portfolio, which delivers new business use cases across industries.

Modernizing applications and evolving to microservices and APIs

How can a telco simplify this complex journey to modernization? For TIM, the change was driven by a desire to modernize 700 different applications before effectively going into the digital business. TIM launched Fly Together to build a digital layer that delivers the scalability and latency needed to transform customers’ digital service experiences. Before, a customer could be querying up to 14 systems, depending on which apps were open. Without the digital experience layer, you can’t express an SLA or determine how long it takes to open an app, according to Bazzica. The first task of Fly Together was to build the layer that decoupled the backend systems from the model that helps run TIM’s digital channels.

Through its work with MongoDB over the past four years, TIM launched a resilient platform that doesn’t require exotic hardware to run efficiently. Because the platform was developed in a cloud-native environment, it comprises containerized microservices and RESTful APIs, setting a new standard for the company’s development of applications. “We are able to modernize, but gradually. We still have our mainframe running,” Bazzica said. “The real experience is seeing the company learning and experimenting. That’s another value with this type of technology; we can try a lot of different things with minimum effort and make big discoveries.”

Four digital services trends to watch

IoT is driving many exciting use cases for Telefónica Tech’s new business division. Within the B2B sector, there is healthy growth across four key industry use cases, according to Telefónica’s Carazo.

Connected Industry and IoT — Telefónica starts with providing private network solutions. These technologies are expected to evolve to more complex use cases like robotics and predictive maintenance in small and medium factories within the next five years.
Smart metering — Massive growth is expected in smart metering, which uses electronic devices to measure energy consumption. The implementation of this trend could spur demand for millions of connected devices.
Connected cars — This sector is expected to grow significantly in the next five to 10 years as operators deploy new digital services like infotainment, security, and safety applications.
Smart cities — Cities around the world are seeking services for their digital citizens looking to live in more sustainable and flexible communities.
These use cases are critical to building modern cities, societies, and industries. Platform thinking and an integrated approach to modernization will help telcos create modern applications, extending their businesses beyond conventional services to include novel digital services. Watch our webinar to learn more about TIM and Telefónica’s transformation to digital services providers.

June 22, 2022
Applied

Built With MongoDB: Overcoming Employee Burnout Through Pioneera

Everyone can feel burned out from time to time. Working late hours to meet a project deadline, checking your phone on the weekend for missed Slack messages from coworkers, an endless stream of Zoom calls — workplace stress can add up quickly and does not leave much room for taking care of yourself. With so much on your plate at any given time, it can be hard to pick up on the warning signs of burnout. One Australian startup has made it its mission to prevent employees from reaching burnout, with software trained to pick up on those warning signs and alert you. The application, Indie, from Australian startup Pioneera, sends personalized notifications in real time, when employees need them the most. Similar to a spellchecker, Indie helps individuals, teams, and companies prevent burnout.

Built With MongoDB spoke with our 2022 MongoDB Savvy Startup Innovation Award winner, Danielle Owen Whitford, who founded Pioneera in 2018. Whitford discussed how she came up with the idea, how the software works, and what the future of Pioneera holds.

Built With MongoDB: What is Pioneera all about?

Danielle Owen Whitford: Pioneera uses early warning indicators to help reduce workplace stress and prevent burnout in a confidential and safe way. When we see those early warning signs, we help that person get the help they need in real time to reduce their stress, promote wellness, improve productivity — all that good stuff. Essentially, we are trying to use technology to prevent mental health issues in the workplace, which are rising at an alarming rate.

Where did the names Pioneera and Indie come from?

Our mission is to pioneer a new era of work, and it just came out as Pioneera! As for the name Indie, it’s actually named after my daughter. Our first MVP had a different name, and we had some mixed responses to it. I was part of SheStarts, an accelerator program in Australia, and I was talking to my fellow founders about some of the experiences I had with my daughter and how she courageously called me out on working too hard. They said, “Why don’t you call the bot Indie?” Customers and users loved it, so Indie bot was a keeper.

What are some examples of common stress signals that Indie picks up?

We assess language, linguistic markers, and behaviors as the three key areas. From a language point of view, we see that certain types of words used within a workplace context are exhibitors of stress. For example, when we’re stressed in real life, we say that we’re stressed, but at work we’re more likely to say, “I feel stretched.” And that’s a word we have built into our scoring system, which we developed with a psychologist. On the positive side, we look for words like achievement and win. We also look for behavior — how we act in the workplace, particularly around our communication systems, because that’s where Indie sits.

What made you decide to start Pioneera?

I burned myself out in 2016. I spent 20 years in big companies and had a whole range of senior roles, from running retail networks and call centers to large-scale transformation. It’s not like I was hidden away in the organization — I reported into the executive team and was very visible, so we all saw the signs. I loved what I did, but I didn’t see the warning signs. I left because I felt like I had no other options. In hindsight I know that I did, but at the time, I just couldn't see past where I was at. That is a classic sign of burnout. I took a bit of time off.
I started looking at my former colleagues and my peers, and I realized this burnout phenomenon was happening everywhere. I used to see emails from my team that said, “Here we go again” and “I don’t want to do this anymore.” My first degree was in psychology and my second was a master’s in communication, so I instinctively responded to that language. If I saw it, I would call them up and ask what was going on. My teams didn’t burn out, and they always delivered. But clearly nobody had seen that for me, and I missed all the signs myself, so I burned out. I thought, “Wouldn’t it be cool if we could automate that for everyone?” The naivete in me thought that if Microsoft created a spell-checker, surely it can’t be that hard to create a spell-checker for stress. We know the language is there — we’ve all gotten emails from people where we know they’re having a bad day. That’s what I set out to do: take my 20 years of experience and turn it into a way to help prevent this burnout from happening. I wanted Indie to do for the world what I had done for my colleagues.

Burnout happens at an individual level, but the impacts are pretty significant — not just for that person or their family, but also for the workplace and then for society in general. We’re seeing health care costs that are through the roof, and we’re going to see long-term impacts on the next generation in terms of the ability to educate. These serious social issues are something I knew I needed to turn my attention to.

In terms of employees, can you explain who has access to what information?

We’re obsessed with privacy and confidentiality, and it's built into every part of the product. Everything is done in a way that protects the confidentiality and privacy of the individual. We did a lot of user testing before we started building the product, and users said that they really loved the idea and the fact that their company would buy it for them. So it contributed to an employer value proposition. But, and it is a big but, they did not want their boss to know that they were stressed, because they thought they’d miss out on a project or HR would contact them or something like that. That feedback has become the core of everything we do.

What does the future of Pioneera look like?

We’ve really upped our game on patterns of behaviors that indicate action is needed and the tips we provide to encourage action. We have partnered with expert psychologists to deliver content that is evidence- and research-based and proven to work. We’ve revolutionized our user experience to build connection and trust from the first moment a user hears about Indie. We’re looking to scale internationally, with a vision for everyone globally to have Indie’s personalized, real-time support.

What have you enjoyed the most about building Pioneera?

It feels like there’s real meaning to what we’re doing and we’re actually making a difference in people's lives. I’ll get a call from a customer who says they were headed toward burnout and Indie stopped them. Or a call from a team manager delighted that they acted on Indie’s recommendations and their team is thriving. That sort of thing is always delightful to hear. I feel like we’re doing something positive for the world.

How has working with MongoDB enabled Pioneera to succeed?

The way the database is set up and structured has enabled us to focus on the things we need to focus on, because we know MongoDB has our back.
We’re a technology company building innovative technology, and we need to deliver our product to the market in a scalable, reliable way. You can run the risk of building great technology that isn’t actually a product solving a problem for customers, because the product features aren’t delivering value. MongoDB does technology really well, and that’s what we use it for — to make sure we’re delivering great product features and value to the customer, today and tomorrow.

Learn more about using Pioneera to overcome employee burnout and find out more about our MongoDB for Startups program.

June 16, 2022
Applied

Building a Modern App Stack with Apollo GraphQL and MongoDB Atlas

Delivering new app experiences with legacy architectures is slow and painful. Many organizations invest massive amounts of resources to make their infrastructure more resilient and flexible, yet find they’re still not delivering products at the speed they seek. API complexity means that, rather than delivering new experiences, frontend and backend teams must navigate scattered microservices, versioned REST endpoints, and complex database management. This article explains how teams can reduce that complexity with Apollo GraphQL and MongoDB Atlas.

GraphQL can help teams integrate these scattered REST APIs and microservices into a unified schema that frontend developers can query, fetching only the data required to power an experience while remaining agnostic to where the data is sourced from. However, running everything through a single GraphQL server (read: monolith) with multiple teams rapidly contributing changes creates a bottleneck. The complexity of the API layer grows exponentially as the number of client devices, applications, and developers increases — and backend teams can no longer work autonomously or push changes on their own release schedules. To be efficient with GraphQL, developers need:

A unified API, so app developers can rapidly create new experiences
A modular API layer, so each team can independently own their slice of the graph
A seamless, high-performance data layer that scales alongside API consumption

An app stack that delivers

A supergraph is a GraphQL API designed to benefit frontend and backend teams simultaneously. It’s a unified API layer built with Apollo Federation, a declarative, modular GraphQL architecture. Unlike a monolithic schema, a supergraph is composed of smaller graphs called subgraphs, each with its own schema. Teams can evolve their subgraphs independently, and their changes will be automatically rolled into the overall supergraph, allowing them to deliver autonomously and incrementally.

However, the efficiency of a supergraph depends on the capabilities and reliability of the underlying data layer. MongoDB Atlas — MongoDB’s fully managed developer data platform — comes with that promise. It offers a flexible document model that gives developers an intuitive way to work with GraphQL’s nested data structure, while providing a reliable data layer that can run anywhere, be deployed across multiple regions and cloud providers, and scale horizontally thanks to its distributed nature. Together, a supergraph and MongoDB Atlas create a composable app stack that eliminates complexity and empowers teams to innovate faster than ever before.

Figure 1: Simplify app architecture with a composable supergraph and unified data access layer using Apollo Federation and MongoDB Atlas

App dev experience

When crafting a new app experience, developers want to browse a unified schema, create queries that fetch exactly the data needed, measure API performance, and use the API in minutes, instead of dedicating days or weeks to finding the right API to stitch into each web, Android, iOS, tablet, and watch app individually. However, when apps have to use lots of REST APIs directly, the developer experience and end-user performance suffer. According to PayPal, UI developers were spending less than one-third of their time actually building UI. The remainder of that time was spent figuring out where and how to fetch data, filtering and mapping over that data, and orchestrating API calls.
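As a hypothetical illustration of the contrast, the sketch below shows the frontend experience a unified graph is meant to enable: one endpoint, one query, and only the fields the view needs. The endpoint URL and the order schema are invented for the example.

```typescript
// One request to one graph endpoint, asking for exactly the fields a
// UI view needs; the graph resolves where each field actually lives.
const query = `
  query OrderSummary($id: ID!) {
    order(id: $id) {
      status
      customer { name }
      items { productName quantity }
    }
  }
`;

async function fetchOrderSummary(id: string) {
  const res = await fetch("https://graph.example.com/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { id } }),
  });
  const { data } = await res.json();
  return data.order;
}
```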
With a supergraph, developers can query a single GraphQL endpoint for all the data they need and discover, consume, and optimize without having to navigate a sea of REST APIs and microservices. A key characteristic of a principled GraphQL API is an abstract, demand-oriented schema, which provides the data needed to power the customer experience and abstracts away the microservices and data layer underneath. The most powerful graphs serve as a facade on top of existing microservices, abstracting the lower-level backend domain models into a curated customer experience model that provides the high-level information displayed in the UI. This experience model allows for a consistent UX across web, mobile, and wearable apps.

API dev experience

Backend developers want the freedom to build and evolve services and capabilities autonomously. But this is a tall order when clients are simultaneously consuming those services. It’s nearly impossible to refactor without introducing breaking changes, and harder still to understand what the impact of those breaking changes will be. The result is that almost any change to the API requires coordination with all the client teams. With a supergraph and a flexible data layer behind it, teams can deliver changes independently to modular subgraphs that compose into the overall supergraph. Apollo Federation’s declarative architecture and powerful directives keep teams working autonomously without breaking clients.

Choosing the right graph-native data layer

Building a scalable supergraph starts with choosing the right data layer to power backend services. In the past, relational databases required ORMs or manual mapping of the underlying relational format to an object or document structure that apps could use, such as JSON. The impedance mismatch between what the database provided and what client apps needed resulted in performance and maintenance issues that slowed down app development and app performance. In contrast to relational databases, MongoDB’s document model and GraphQL share a simple nested data structure, which means developers can easily use them together without having to map GraphQL to relational data and define relationships. The added composability of Apollo Federation lets developers easily federate across multiple collections or databases, between single and multi-cloud Atlas clusters running in different regions, and even between Atlas and on-premises clusters. In this way, developers gain the flexibility of MongoDB’s document model and the freedom to iterate on their GraphQL schema with safety and confidence ensured by automated schema checks.

Choosing the right subgraph architecture

When it comes to choosing how to connect subgraphs to the data layer, a few options are available.

Traditional subgraph (microservices plus database)

In many environments, there are years’ worth of existing microservices, REST APIs, and SOA services in production. Subgraphs (written in any of 20-plus languages and frameworks) can be added as a new layer on top of these existing microservices and composed into an experience-driven supergraph that serves as a ViewModel backend to power new app experiences for web, mobile, and wearable devices. This is a highly effective and proven model.

Graph-native subgraph (direct to MongoDB)

When new subgraphs are added in greenfield environments, or to add net-new capabilities, the subgraphs can be designed to talk directly to the database, without microservices or REST APIs in the middle.
This approach isn’t always the right answer, especially for companies that have standardized on REST or gRPC in the backend. However, it is a simpler setup that can improve performance by removing a layer; a sketch of this pattern appears below, just before the conclusion.

Traditional subgraph (microservices plus MongoDB Atlas)

MongoDB Atlas is a fully managed, multi-cloud, multi-region data layer for traditional microservices. With options such as the official MongoDB drivers for 16 languages, a fully managed HTTPS-based Data API, or community-managed ODMs such as Mongoose, developers have a range of options for building their supergraph's data layer with Atlas. Developers get the flexibility of choosing a path that provides an idiomatic and familiar way to work with the database, in the language and development style they know best.

MongoDB Atlas GraphQL API (hosted subgraph API)

MongoDB Atlas’s GraphQL API is automatically generated from the underlying database document schema and can be directly composed into a supergraph. Developers who choose this approach can reduce the time spent writing custom GraphQL resolvers, because these are automatically generated by MongoDB Atlas. When the MongoDB document model closely matches the query shape — a paradigm that is common with document databases such as MongoDB — queries can be served without transformation or mapping. This setup also applies to relationships between different types of documents in different collections; thus, the generated GraphQL schema will also allow devs to query collections that other teams may own in the same graph. If a developer’s desired query shape differs from the underlying document model, such as when shaping schemas in a Server Driven UI (SDUI) pattern, they can leverage the @requires schema directive to pull multiple document fields into an experience-oriented property tuned for rendering by frontend apps. In this way, devs benefit from both efficient data access and custom model mapping when needed.

Figure 2: Composing a supergraph with Apollo Federation, custom built resolvers for MongoDB, and the hosted MongoDB Atlas GraphQL API endpoint

Expand business use cases with subgraphs

Supergraphs make it easy to compose microservices, but when it comes to hosting, managing, and storing the data that performs the business logic, the MongoDB Atlas Application Data Platform can help teams build out their app requirements faster. Need a search bar? The same data stored in an Atlas cluster can be search-indexed, and Atlas Search can perform full-text search operations without additional setup or syncing data to another search technology. Want to embed graphs and charts? A time series collection makes it easy to query large chunks of data by timestamp, and MongoDB Atlas Charts lets devs use the same MongoDB database to build these inside applications. Other services, like custom Data APIs and data federation, help ensure that data can be queried and stored in the way that best fits a team’s needs.

Focused on scale

Engineering teams need to be able to anticipate both current and future needs. MongoDB Atlas delivers an application data platform that spans multiple regions, clouds, and deployment types to solve the data challenges of transactional workloads, modern apps, and microservices. Self-healing clusters ensure that developers are not scrambling to diagnose issues with their data nodes, and multi-region and multi-cloud deployments provide automatic failover.
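To make the graph-native option concrete, here is a minimal sketch of a federated subgraph whose resolvers call MongoDB directly through the official Node.js driver, using Apollo Server and the @apollo/subgraph package. The database, collection, and schema are hypothetical, and a production subgraph would add connection handling, error handling, and schema checks.

```typescript
import { ApolloServer, gql } from "apollo-server";
import { buildSubgraphSchema } from "@apollo/subgraph";
import { MongoClient, ObjectId } from "mongodb";

// Hypothetical Atlas connection and "reviews" collection.
const mongo = new MongoClient(process.env.ATLAS_URI ?? "mongodb://localhost:27017");
const reviews = mongo.db("shop").collection("reviews");

// A small federated subgraph: @key lets the router join Review data
// with entities owned by other subgraphs.
const typeDefs = gql`
  extend schema
    @link(url: "https://specs.apollo.dev/federation/v2.0", import: ["@key"])

  type Review @key(fields: "id") {
    id: ID!
    productId: ID!
    rating: Int!
    body: String
  }

  type Query {
    reviewsForProduct(productId: ID!): [Review!]!
  }
`;

// Resolvers talk to MongoDB directly; no REST layer in between.
const resolvers = {
  Query: {
    reviewsForProduct: (_: unknown, { productId }: { productId: string }) =>
      reviews.find({ productId }).toArray(),
  },
  Review: {
    id: (doc: { _id: ObjectId }) => doc._id.toHexString(),
    // Entity resolution when the router references a Review by its key.
    __resolveReference: (ref: { id: string }) =>
      reviews.findOne({ _id: new ObjectId(ref.id) }),
  },
};

const server = new ApolloServer({ schema: buildSubgraphSchema({ typeDefs, resolvers }) });
mongo.connect().then(() =>
  server.listen({ port: 4001 }).then(({ url }) => console.log(`Reviews subgraph at ${url}`))
);
```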
Together, Apollo and MongoDB are committed to providing developers with the effective tools they need to simplify their architecture, improve app performance, ship faster, and grow their businesses. Register for Atlas today, and learn more about the supergraph on the Apollo blog.

June 13, 2022
Applied

5 New Analytics Features to Accelerate Insights and Automate Decision-Making

The applications we use every day are continually delivering richer experiences and working more efficiently. One of the driving forces of this progress is analytics. As organizations ingest and use ever-increasing volumes of data, they are able to derive more timely insights about their users’ preferences, patterns, and needs, and to deliver just-in-time information and choices within their applications. The next generation of applications will take a huge leap in intelligence by integrating real-time analytics into their app experiences. Such analytics will increasingly be automated, developer-driven, and incorporated seamlessly within data platforms alongside transactional (or application) workloads.

As announced at MongoDB World 2022, MongoDB will introduce five new features this year that will help businesses modernize their analytics: Column Store indexes, MongoDB Atlas Data Federation, MongoDB Atlas Data Lake, MongoDB Atlas SQL Interface, and distinct tiering for analytics nodes. Using these features will automate decision-making and drastically decrease the time it takes to get application insights in front of users.

Modernizing analytics around operational data

Today, in order to create dynamic in-app experiences, businesses need to take multiple steps — collecting application data, sending it to a data warehouse or data lake to run analytics on it, deriving insights, coding new experiences, and releasing the app back to users. Modern applications must be able to automate this process by capturing and processing the data at the source — that is, in the application. The data inside your application is the most valuable and current picture of what is happening with your business. Combining real-time, operational, and embedded analytics, analytics driven by application data helps determine, influence, and automate decision-making for the app and provides real-time insights for the user.

Real-time analytics is, as the name implies, done nearly instantly, usually on data that resides in an application. Examples include fraud detection for banks and personalized offers or recommendations on an e-commerce site. The analytics can range from basic aggregations to machine learning models that provide insight and automate an action, such as sending an offer. One example is Ticketek, an Australia-based event ticketing company, which uses real-time analytics to make critical decisions, such as whether to open up more sections of a venue or put on more shows.

Operational analytics is the process of finding insights in your data sources to improve decision-making for the daily operations of a business. Use cases include real-time reporting, improving overall operations, and product analytics. Online grocer Boxed, for example, was able to manage inventory levels during peak demand thanks to real-time data and insights directly from MongoDB Atlas.

Embedded analytics enhances applications by embedding data visualizations and dashboards with MongoDB Atlas Charts, providing users with relevant insights when and where they need them.

What's New

Here are the five advances announced at MongoDB World that can help businesses modernize their analytics:

Column Store indexes: This feature enhances analytical queries by allowing developers to deliver real-time analytics on live, operational data. It improves the performance of common analytical queries by adding a structure on top of collections that groups similar fields together to speed up reads.
This eliminates the need to offload analytics to disparate specialized systems and to rely on complex and fragile ETL pipelines that ultimately slow down the time to insight. (A short aggregation sketch appears at the end of this article.)

Atlas Data Federation: Atlas Data Lake is relaunching as Atlas Data Federation to reflect our focus on the value of federation. MongoDB Atlas users have the ability to query several data sources at once.

Atlas Data Lake: The new Atlas Data Lake provides a cost-effective data store optimized for high-performance analytics on large volumes of data. Atlas Data Lake delivers analytical workload isolation, allowing you to perform complex, long-running, or large analytical queries without impacting your production application. Fully integrated into MongoDB Atlas, Atlas Data Lake can be provisioned alongside your Atlas database, making the ingestion and optimization of data simple, with no infrastructure to set up or manage.

Atlas SQL Interface, Connectors, and Drivers: Atlas’s new SQL capabilities allow people who mainly work in SQL tools, such as data analysts, to easily interact with Atlas data. Users can query Atlas data via a BI tool or SQL driver, directly querying live data with enhanced schema control.

Distinct tiering for analytics nodes: Users can choose an appropriately sized node tier dedicated to their analytics workload without needing to change the tier of the entire cluster. This can enhance the performance of your analytics workloads; you can provision only what you need if your analytical workload requirements are smaller than your transactional requirements.

Learn more about MongoDB World 2022 announcements at mongodb.com/new and in these stories:

4 New MongoDB Features to Improve Security and Operations
Closing the Developer Experience Gap: MongoDB World Announcements
Streamline, Simplify, Accelerate: New MongoDB Features Reduce Complexity
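As a rough sketch of what analytics on live, operational data looks like in practice, here is an analytical aggregation run directly against an operational collection with the Node.js driver. The cluster, collection, and field names are hypothetical, and the commented-out column store index uses the preview syntax shown around MongoDB World 2022, which may change before release.

```typescript
import { MongoClient } from "mongodb";

// Hypothetical cluster and operational "orders" collection.
const client = new MongoClient(process.env.ATLAS_URI ?? "mongodb://localhost:27017");
const orders = client.db("retail").collection("orders");

// Rolling 24-hour revenue per product category, served straight from
// the operational data set rather than an offloaded warehouse copy.
async function revenueByCategory() {
  const since = new Date(Date.now() - 24 * 60 * 60 * 1000);
  return orders
    .aggregate([
      { $match: { createdAt: { $gte: since } } },
      { $group: { _id: "$category", revenue: { $sum: "$total" }, orders: { $sum: 1 } } },
      { $sort: { revenue: -1 } },
    ])
    .toArray();
}

// Preview column store index syntax (subject to change): groups similar
// fields together to speed up scans like the $group stage above.
// await orders.createIndex({ "$**": "columnstore" });
```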

June 7, 2022
Applied

Closing the Developer Experience Gap: MongoDB World Announcements

Now is a great time to be a software developer or architect. Never have there been so many solutions, vendors, and architectural patterns to choose from as you build new applications and features. But the sheer number of choices creates another puzzle for developers to solve before they can begin to build. Many of MongoDB’s efforts over the past year have addressed the needs of the developer communities we serve, and one of the greatest needs we’ve seen is improving the experience of being a developer. At MongoDB World 2022, we announced several tools to help improve that experience and to boost developer velocity:

Atlas Data API — A serverless API that lets you easily access your Atlas data from any environment that supports HTTPS requests, including services like AWS Lambda and Google App Services. The Atlas Data API is fully functional upon generation, language-agnostic, and secure from the start. (A minimal request sketch appears at the end of this article.)
Serverless instances — With MongoDB serverless instances, developers don’t have to worry about scaling up to meet increasing workloads, or about paying for resources they’re not using when their workload is idle. The serverless model dynamically uses only what it needs — and charges only for what it uses.
Atlas CLI — The MongoDB Atlas CLI is a completely new way to access Atlas in a non-GUI-centered environment. CLIs are often the interaction method of choice for developers, especially advanced developers who prefer control and speed over a more visual interface. Our new CLI gives these developers an easier registration experience, with nearly instant free tier deployments in Atlas.
Time series — We have expanded our data platform so developers can work more easily with time series data in support of IoT use cases, financial analytics, logistics, and more. MongoDB time series makes it faster and cheaper to build and run time series applications by natively supporting the entire time series data lifecycle.
Facets in Atlas Search — Categorize data with facets for fast, filtered search results. With facets in Atlas Search, you can index your data to map fields to categories, then quickly update query results based on the ones relevant to your users.
Verified Solutions — The MongoDB Verified Solutions program gives developers the confidence to use third-party tools, such as Mongoose, by guaranteeing comprehensive testing of the tools as well as a base level of support from MongoDB Technical Services.
Change streams — Change streams enable developers to build real-time, event-driven applications that react to data changes as they happen. This allows them to build more complex features and better end-user experiences.

The paradox of choice for developers

Developers today have no shortage of tools to work with, but the abundance of options is itself a problem. And when there’s little or no central decision-making, developers are forced to figure out how to stitch together a patchwork of technology solutions to create the seamless user experiences that consumers have come to expect. Developers had fewer choices when applications were built on a three-tier framework composed of a relational database, a J2EE stack, and an app or web server. Since then, however, application development has fragmented into different architectures, SDKs, and cloud services, leaving developers many more patterns to figure out.
On top of that, the rise of DevOps has increased the pressure on developers to build and maintain the tools they’re working with, and serious development shops often take pride in building their own toolchains, backends, and databases. Put it all together — the abundance of choices, the patchwork nature of solutions, the pressure to build and maintain toolchains, and the glue code keeping it all together — and it adds up to more cognitive load, elevated stress levels, and a lengthening time to value. As Stephen O’Grady from analyst firm RedMonk explains, “Developers are forced to borrow time from writing code and redirect it toward managing the issues associated with highly complex, multifactor developer toolchains held together in places by duct tape and baling wire. This, then, is the developer experience gap.”

Having a lot of options is a good thing — until it’s not. One way we’re working to unwind the paradox of choice is by providing tools that exist in the same form whether in the cloud or on the client — that is, solutions that integrate with the way developers already work. This could mean plugging into a CLI first, abstracting provisioning, simplifying and securing the data layer so developers don’t have to worry about it, and unlocking the creativity of developers with a data model that maps to how data is actually going to be used. We’re also enabling developers to access the tools they need from within MongoDB, without having to integrate myriad bolt-on tools (i.e., the paradox of choice).

Building at velocity

The key to unlocking developer productivity, as we see it, is giving developers the building blocks they need to create a whole workload from scratch, or to bring a new workload into their ecosystem — be it time series, search, or analytics — and have it run on a single platform instead of having to stitch together disparate systems. Our goal is to bring a modern data layer to modern applications, and to bring that experience to more and more of what you work on. We know that modern applications have complicated data requirements, but that shouldn’t mean complicated data infrastructure. We want to serve most of your workloads with a single unified platform.

Learn more about MongoDB World 2022 announcements at mongodb.com/new and in these stories:

5 New Analytics Features to Accelerate Insights and Automate Decision-Making
4 New MongoDB Features to Improve Security and Operations
Streamline, Simplify, Accelerate: New MongoDB Features Reduce Complexity
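As a minimal sketch of the Atlas Data API mentioned above: it is plain HTTPS, so any environment with an HTTP client can use it. The app ID, API key, and cluster, database, and collection names below are placeholders.

```typescript
// Placeholder Data API base URL; each Atlas app gets its own app ID.
const BASE = "https://data.mongodb-api.com/app/<your-app-id>/endpoint/data/v1";

async function findOneTask(taskId: string) {
  const res = await fetch(`${BASE}/action/findOne`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.DATA_API_KEY ?? "<your-api-key>", // placeholder key
    },
    body: JSON.stringify({
      dataSource: "Cluster0", // placeholder cluster name
      database: "todo",
      collection: "tasks",
      filter: { _id: { $oid: taskId } }, // Data API bodies use EJSON
    }),
  });
  const { document } = await res.json();
  return document;
}
```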

June 7, 2022
Applied

Streamline, Simplify, Accelerate: New MongoDB Features Reduce Complexity

At MongoDB World 2022, we announced several developer-centric features that provide more powerful analytics, streamline operations, and reduce complexity. In this post, we look at MongoDB Atlas Data Federation, MongoDB Atlas Search, MongoDB Atlas Device Sync with Flexible Sync, and change streams.

As consumer expectations of the applications they use grow, developers must continue to create richer experiences. To do that, many are adding a variety of data systems and components to their architectures, including single-purpose NoSQL datastores, dedicated search engines, and analytics systems. Piecing these disparate systems together adds complexity to workflows, schedules, and processes, however. For instance, one application could use one solution for database management, another for search functionality, and a third for mobile data sync. Even within an organization, teams often use different products to perform the same tasks, such as data analysis. This way of building modern applications often causes significant problems, such as data silos and overly complex architectures. Additionally, developers are forced to spend extra time and effort learning how each of these components functions, ensuring they work together, and maintaining them over the long term. It should not be the developer’s job to rationalize all these different technologies in order to build rich application experiences.

The developer data platform

For developers and their teams, cobbling together a data infrastructure from disparate components is inefficient and time-consuming. Providers have little incentive to ensure that their solutions can function alongside the products of their competitors. Further, internal documentation, which is key to demystifying the custom code and shortcuts in a bespoke architecture, might not be available or current, and organizational knowledge gets lost over time. MongoDB Atlas, our developer data platform, was built to solve these issues. An ecosystem of intuitive, interlinked services, Atlas includes a full array of built-in data tools, all centered around the MongoDB Atlas database. Features are native to MongoDB, work with a common API, are designed for compatibility, and are intended to support any number of use cases or workloads, from transactional to operational, analytics to search, and anything in between. Equally important, Atlas removes the hidden, manual work of running a sprawling architecture, from scaling infrastructure to building integrations between two or more products. With these rote tasks automated or cleared away, developers are free to focus on what they do best: building, iterating, and releasing new products.

MongoDB Atlas Data Federation

MongoDB Atlas Data Federation allows you to write a single query that works with data across multiple sources, such as your Amazon S3 buckets, Atlas Data Lake, and MongoDB Atlas clusters. Atlas Data Federation is not a separate repository of data but a service to combine, enrich, and transform data across multiple sources, regardless of origin, and output it to your preferred location. With Atlas Data Federation, developers who want to aggregate data or federate queries do not need complex data pipelines or time-consuming transformations — a key advantage for those seeking to build real-time app features.
Atlas Data Federation also makes it easier to quickly convert MongoDB data into columnar file formats, such as Parquet or CSV, so you can facilitate ingestion and processing by downstream teams that use a variety of analytics tools.

MongoDB Atlas Search

Rich, responsive search functionality has become table stakes for both consumer-facing and internal applications. But building high-quality search experiences isn’t always easy. Developers who use a third-party, bolt-on search engine have to deal with problems like syncing data between multiple systems, extra operational overhead for scaling, securing, and provisioning, and different query interfaces for database and search.

Built on the industry-leading Apache Lucene search library, MongoDB Atlas Search is the easiest way to build rich, fast, and relevant search directly into your applications. It compresses three systems — database, search engine, and sync mechanism — into one, so developers don’t have to deal with the problems that bolt-on search engines introduce. It can be enabled with a few API calls or clicks and uses the same query language as the rest of the MongoDB product family. Atlas Search provides all of the features developers need to deliver rich, personalized search experiences, like facets, now generally available, which offer users a way to quickly filter and navigate search results. With facets, developers can index data to map fields to categories like brand, size, or cost, and update query results based on relevance. This allows users to easily define multiple search criteria and see results updated in near real time.

MongoDB Atlas Device Sync

With apps such as TikTok, Instagram, and Spotify, mobile users have come to expect features such as real-time updates, reactive UIs, and an always-on, always-available experience. While the user experience is effortless, building these capabilities into a mobile app is anything but. Such features require lots of time and resources to develop, test, debug, and maintain. MongoDB Atlas Device Sync is designed to help developers address mobile app data challenges, including limited connectivity, dead zones, and multiple collaborators (all with varying internet speeds and access), by gathering, syncing, and resolving sync conflicts between the mobile database and MongoDB Atlas, without the burden of learning, deploying, and managing separate data technologies.

At World 2022, MongoDB announced Flexible Sync, a new way to sync data between devices and the cloud. Using Flexible Sync, developers can now define synced data using language-native queries and fine-grained permissioning, resulting in a faster, more seamless way of working, and one analogous to the way developers code and build. Previously, developers had to sync full partitions of data; Flexible Sync enables synchronization of only the data that’s relevant. With support for filter logic, asymmetric sync, and hierarchical permissioning, Flexible Sync can reduce the amount of required code by 20% or more and speed up build times from months to weeks.

Change streams

Data changes quickly, and your applications need to react just as quickly. When a customer’s order is shipped, for instance, they expect an in-app or email notification — and they expect it immediately. Yet building applications that can respond to events in real time is difficult and often requires polling infrastructure or third-party tools, both of which add to developer overhead.
Latency and long reaction times result in outdated data and poor experiences for the users of that data. Like Atlas’s Database Triggers, change streams enable developers to build event-driven applications and features that react to data changes as they happen. Along with reducing the complexity and cost of building this infrastructure from scratch, the new change stream enhancements (available in MongoDB 6.0) let you determine the state of your database before and after an event occurs, so you can act on the changes and build business logic, analytics, and policies around them. That opens up new use cases, such as retrieving a copy of a document immediately after it is updated.

All of these updates and new capabilities focus on the critical need to eliminate complexity in order to build, deploy, and secure modern applications in any environment. Together, they help solve what MongoDB president and CEO Dev Ittycheria called a key developer challenge in his MongoDB World 2022 keynote: reducing the friction and cost of working with data.

Learn more about MongoDB World 2022 announcements at mongodb.com/new and in these stories:

5 New Analytics Features to Accelerate Insights and Automate Decision-Making
4 New MongoDB Features to Improve Security and Operations
Closing the Developer Experience Gap: MongoDB World Announcements
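To illustrate the change stream enhancements described above, here is a minimal sketch using the Node.js driver against MongoDB 6.0 or later. The cluster and collection names are hypothetical; note that pre- and post-images must first be enabled on the collection.

```typescript
import { MongoClient } from "mongodb";

// Hypothetical cluster; pre/post images require MongoDB 6.0+.
const client = new MongoClient(process.env.ATLAS_URI ?? "mongodb://localhost:27017");
const db = client.db("shop");

async function watchOrders() {
  // Enable pre- and post-images on the collection first.
  await db.command({
    collMod: "orders",
    changeStreamPreAndPostImages: { enabled: true },
  });

  const stream = db.collection("orders").watch([], {
    fullDocument: "updateLookup",              // document state after the change
    fullDocumentBeforeChange: "whenAvailable", // document state before the change
  });

  for await (const change of stream) {
    if (change.operationType === "update") {
      // React in real time, e.g. notify a customer when an order ships.
      console.log("before:", change.fullDocumentBeforeChange);
      console.log("after:", change.fullDocument);
    }
  }
}
```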

June 7, 2022
Applied

4 New MongoDB Features to Improve Security and Operations

Data platforms are designed to remove operational complexity and enable developers to move and innovate faster. For applications that are critical to your users and your business, the data platform powering them must also be reliable, scalable, and global. Achieving that should take minimal work, both upfront and on an ongoing basis. At MongoDB World 2022, we announced several new capabilities that further help organizations achieve operational excellence: Queryable Encryption, Cluster-to-Cluster Sync, Scheduled Archiving, MongoDB Atlas Operator for Kubernetes, and MongoDB Atlas Serverless.

With the introduction of Queryable Encryption, MongoDB will be the only database provider that allows customers to run expressive queries, such as equality, range, prefix, suffix, substring, and more, on fully randomly encrypted data, just as they can on unencrypted data. This is a huge advantage for organizations that need to run expressive queries while also securing their data. Queryable Encryption reduces the heavy lifting involved in working with encrypted data, resulting in faster app development without undermining data protection or compliance with data privacy regulations. (A configuration sketch appears at the end of this article.)

Not every organization is fully — or may ever be fully — in the cloud. Many businesses also leverage hybrid or multi-cloud environments. Cluster-to-Cluster Sync enables continuous, uni-directional, real-time data synchronization of two MongoDB clusters in the same or different environments — public cloud, private cloud, on-premises, and at the edge. MongoDB now supports, for example, hybrid Atlas and Enterprise Advanced deployments, wherein a cluster’s data can be synced from on-premises to Atlas, or vice versa. With Cluster-to-Cluster Sync, organizations have full control of the synchronization process. They can decide when to start, stop, pause, or resume synchronization, or reverse its direction, and they can monitor the progress of the synchronization in real time. This new capability will enable greater experimentation and innovation, increase organizational insights, and help developers find more efficient ways to work with data. Use cases that benefit from having the data of two MongoDB clusters fully synchronized include data migration, enhanced development lifecycles, dedicated analytics, audit compliance, and improving latency by moving data to the edge.

The MongoDB Atlas Operator for Kubernetes is the best way to use MongoDB with Kubernetes. With the Atlas Operator, developers can seamlessly integrate MongoDB Atlas into their Kubernetes deployment pipeline, controlling Atlas resources without leaving the Kubernetes control plane. They can also control Atlas projects, clusters, database users, backup policies, serverless instances, private network endpoints, and more. The operator is compatible with any certified Kubernetes distribution, including Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Red Hat OpenShift, and dozens more.

We are also enhancing the Online Archive feature of Atlas with two new capabilities: data expiration and scheduled archiving. With data expiration, you can define how long data is stored in the online archive before it is deleted. With scheduled archiving, you can set rules about the time window in which the archive job runs — daily, weekly, or monthly.
You can also edit the archive rule to define when you want to archive your data and when you want it deleted from the archive.

One big trend in the developer world is removing operational overhead by moving to a managed database offering. This move away from day-to-day management and administration lets developers do what they do best: create. To this end, MongoDB has rolled out Atlas Serverless. With Atlas Serverless, server provisioning and management are abstracted away from the customer or end user of the service. This eliminates the cognitive load of sizing and scaling infrastructure to keep up with application demand. Instead of paying for idle resources, with Atlas Serverless you pay only for what you use. By simplifying provisioning, Atlas Serverless helps organizations accelerate time to market and improve experiences for both developers and IT managers.

All of these new features have been designed to help organizations improve their operational excellence, ensuring security, consistency, and scale while relieving developers and IT managers of repetitive operational tasks.

Learn more about MongoDB World 2022 announcements at mongodb.com/new and in these stories:

5 New Analytics Features to Accelerate Insights and Automate Decision-Making
Closing the Developer Experience Gap: MongoDB World Announcements
Streamline, Simplify, Accelerate: New MongoDB Features Reduce Complexity
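For a sense of what Queryable Encryption configuration looks like, here is a sketch based on the preview API in the Node.js driver at the time of the announcement; the exact surface may change as the feature matures. The key vault namespace, local master key, data key, and field names below are placeholders, and a real deployment would create data keys with ClientEncryption and a cloud KMS.

```typescript
import { Binary, MongoClient } from "mongodb";

// Placeholder 96-byte local master key; production would use a cloud KMS.
const kmsProviders = { local: { key: Buffer.alloc(96) } };

// Placeholder data key ID; real keys are created via ClientEncryption.
const dataKeyId = new Binary(Buffer.alloc(16), Binary.SUBTYPE_UUID);

const client = new MongoClient(process.env.MONGODB_URI ?? "mongodb://localhost:27017", {
  autoEncryption: {
    keyVaultNamespace: "encryption.__keyVault",
    kmsProviders,
    encryptedFieldsMap: {
      "hr.employees": {
        fields: [
          {
            path: "ssn",
            bsonType: "string",
            keyId: dataKeyId,
            queries: { queryType: "equality" }, // queryable while encrypted
          },
        ],
      },
    },
  },
});

// Inserts encrypt the ssn field transparently, and equality queries such
// as find({ ssn: "123-45-6789" }) run against the encrypted data.
```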

June 7, 2022
Applied

Digital Underwriting: A Digital Transformation Wave in Insurance

Underwriting processes are at the core of insurance companies, and their effectiveness is directly related to insurers' profitability and success. Despite this, underwriting is often one of the most underserved parts of the insurance industry from a technology perspective. Insurers may have sophisticated policy, customer, and claim administration systems, yet underwriters often find themselves wrangling data from a variety of sources into spreadsheets in order to adequately evaluate the financial risks that new applicants and scenarios bring, and to translate those risks into appropriate pricing and coverage decisions.

Because of the complexity and variety of the information sources that must be accessed and integrated, modernized underwriting platforms have been a difficult objective for many insurers to achieve. The cost and time required to build such systems, and the prospect of minimal short-term return on investment, have also made it difficult for leaders to secure funding and support within their organizations. These factors have forced underwriters to persist with manual processes, which are, at best, highly inefficient. At worst, they do not position an insurer to be competitive in the digitally disrupted future of insurance delivery. It does not have to be this way, however. This blog post highlights ways in which insurance companies can leverage new technology and incorporate modern architecture paradigms into their information systems in order to revolutionize their underwriting workflows.

The underwriting revolution

Technology is changing the way organizations operate and measure risk. New advancements in the IoT, manufacturing, and automotive spaces, to mention just a few, are driving insurers to develop underwriting paradigms that are personalized to each individual and adjusted based on real-time data. This is already a reality: some insurers leverage personal wearable technology to assess the fitness level of clients and adjust life and health insurance premiums accordingly. And we are only at the beginning; let's explore what this might look like in 2030.

Imagine a scenario in which a professional living in a major urban area orders a self-driving car through their digital assistant to get to a meeting. The assistant is directly linked to the user's insurer, which automatically calculates the best possible route, taking into account the time required, past accident history, and current traffic conditions, so that the likelihood of car damage and accidents is minimized. If the user decides to drive themselves that day, or picks a different route, the mobility premium increases based on real-time variables of the journey. The user's mobility insurance can also be linked to other services, such as a life insurance policy, which can likewise increase depending on the commute's risk factors.

We don't have to wait until 2030 for a scenario like this to come to fruition. Thanks to advances in IoT devices, mobile computing, and deep learning techniques that mimic the human brain's perception, reasoning, learning, and problem-solving, many of these capabilities can be made a reality here in 2022. As the insurance industry continues to innovate, the underwriting process evolves with it.
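To make the pricing mechanism in this scenario concrete, here is a purely illustrative sketch of how a per-trip premium might respond to real-time risk variables. Every factor, weight, and rate below is invented for illustration; real mobility pricing would be actuarial and far more sophisticated.

```python
# Purely illustrative: a toy per-trip premium adjustment driven by
# real-time risk variables. All factors, weights, and the base rate
# are invented for this sketch.
from dataclasses import dataclass

@dataclass
class TripRisk:
    route_accident_rate: float   # historical accidents per 1M km on route
    traffic_density: float       # 0.0 (empty roads) .. 1.0 (gridlock)
    self_driven: bool            # True if the user drives manually

def trip_premium(base_rate: float, risk: TripRisk) -> float:
    """Scale a base per-trip rate by a simple multiplicative risk load."""
    load = 1.0
    load += 0.02 * risk.route_accident_rate   # riskier route costs more
    load += 0.30 * risk.traffic_density       # heavy traffic raises risk
    if risk.self_driven:
        load += 0.25                          # manual-driving surcharge
    return round(base_rate * load, 2)

# A low-risk autonomous trip vs. a manually driven rush-hour trip
print(trip_premium(1.00, TripRisk(2.0, 0.1, False)))  # 1.07
print(trip_premium(1.00, TripRisk(5.0, 0.9, True)))   # 1.62
```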
In the scenario described above, the underwriting decision-making process has shifted from a manual, spreadsheet-based one to one that is fully automated, with AI/ML decision support. The insurers who achieve this will gain and retain a significant competitive advantage over the next decade.

Technology can help streamline new cases

Underwriters are notoriously faced with administrative complexity when managing any new case, regardless of the risk profile or level. In the commercial insurance space, agents and brokers generally act as a bridge between the insurer and the insured. Email exchanges among the parties are common; they often lack sufficient detail and force the underwriter to chase missing data in order to close the sale and acquire the new business. Issues with data quality, or the absence of key pieces of information, can be addressed by implementing automated intake procedures that leverage natural language processing (NLP), optical character recognition (OCR), and rich text analysis to programmatically extract data from email and other forms of written communication, alert the agent when information is missing, and even attempt to enrich the submission automatically in order to facilitate closing the sale.

What's described above is only the beginning of what's possible when we think about how to bolster and augment underwriting procedures within an insurer. Sanding off the rough edges by reducing manual procedures and helping underwriters focus less on non-differentiating work and more on high-value activities not only alleviates significant pain and frustration for underwriters; it can also help grow the book of business through more competitive pricing, products, and turnaround times.

Triaging times can be drastically reduced

Insurance providers seeking to grow their book of business and expand the channels through which they sell may have to deal with a surge of new coverage requests and changing risk scenarios. Many insurers, however, are unprepared to handle such increases in new business intake volumes. Because of legacy systems, workflow constraints, and resource bottlenecks, a significant uptick in new business could actually produce a negative outcome for the insurer if it cannot be processed in a timely and efficient manner. Could you lose business to a competitor because it could not be underwritten in time? Augmenting traditional workflows with automation and machine learning can begin to address this challenge. How can you do more without significantly burdening or expanding your underwriting team? Many insurers are beginning to automatically classify and route this increased demand using AI/ML. A first step in the underwriting process, after initial intake and enrichment, is triaging: deciding who can best underwrite the given request. This, too, is often a manual process that relies heavily on someone within the organization who knows how best to route the flow of work based on the skills and experience of the underwriting staff. As with detecting gaps in, and enriching, the initial submission intake, machine learning algorithms can be leveraged to ease this burden and reduce the human bottleneck of routing intake work to the best-suited underwriter, as in the sketch below.
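At a toy scale, ML-based triage might look like the following: a text classifier that routes free-text submission summaries to underwriting queues. The training examples, labels, and team names are all invented for illustration; a real system would need richer features, far more data, and human oversight.

```python
# A minimal sketch of ML-based submission triage: classify free-text
# submission summaries into underwriting queues. Training data and
# team labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy historical submissions, each labeled with the team that handled it
submissions = [
    "commercial property fire coverage for a warehouse",
    "fleet auto liability for a regional delivery company",
    "cyber liability for a payments startup",
    "general liability for a restaurant chain",
    "auto physical damage for long-haul trucking",
    "data breach and ransomware coverage for a hospital",
]
teams = ["property", "auto", "cyber", "property", "auto", "cyber"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(submissions, teams)

# Route a new intake to the best-suited queue
new_case = "collision and liability cover for a taxi fleet"
print(router.predict([new_case])[0])  # likely "auto"
```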
Risk assessment processes can be made more effective

Once the intake of new cases has been automated and triaged, we need to think about how to streamline the risk assessment process. Does every single new business case need to be priced and adjusted by an actual underwriter? If we can triage and determine who should work on a new case, can we also route some of the low-risk work to a fully automated pricing and underwriting workflow? Can we reserve the precious time of our underwriting staff for the higher-touch business and accounts that truly need their attention and expertise?

Automated risk assessment has roots in rule-based expert systems dating back to the 1990s. These systems contained tens of thousands of hard-coded underwriting rules that could assess medical, occupational, and avocational risk. They became very complex over the years and still play an essential role in underwriting. ML algorithms can enhance the performance of these systems by fine-tuning underwriting rules and finding new patterns of risk information. The vast amount of data available to insurers can also be used to predict the risk of new cases and scenarios. Once the risk profile of a new case has been established, a pricing model can be applied to programmatically derive the policy cost and communicate it to the prospective client without involving the underwriting team, as imagined in the 2030 scenario mentioned earlier in this article.

Conclusion and follow-up

There are plenty of digital transformation opportunities in the insurance industry. Focusing on underwriting, in particular, will help new and existing players gain a significant competitive advantage in the coming decade. Whether human-based or AI/ML-augmented, underwriting decisions will be underpinned by an ever-growing variety and volume of complex data. In the next blog post in this series, Riding the Transformation Wave with MongoDB, we'll dive deeper into how MongoDB helps insurance innovators create, transform, and disrupt the industry by unleashing the power of software and data. Stay tuned! In the meantime, contact us to learn more about how MongoDB supports insurance innovators.

June 2, 2022
Applied

Connected Healthcare Data: Interoperability to Solve Fragmentation and Drive Better Patient Outcomes

Many differences exist across healthcare systems around the globe, but there is one unfortunate similarity: fragmentation. Fragmentation results when various healthcare organizations (both public and private) cannot communicate with each other, or cannot do so in a timely or consistent manner, and it can have a dramatic impact on patient and population well-being.

Interoperability and communication

A patient can visit a specialist for a specific condition and the family doctor for regular checkups, perhaps even on the same day. But how can both doctors make appropriate decisions if patient data is not shared between them? Fragmented healthcare delivery, as described in this scenario, also leads to data fragmentation. Data fragmentation can cause misdiagnosis and duplication of services; it can also lead to billing issues, fraud, and more, causing preventable harm and representing a massive economic burden for healthcare systems worldwide. To reduce healthcare fragmentation, we need truly interoperable health data.

The longitudinal patient record

A longitudinal patient record (LPR) is a full, lifelong view of a patient's healthcare history and the care they've received: an electronic snapshot of every interaction a patient has, regardless of provider and service. Ideally, this record can be shared across any or all entities within a country's healthcare system. The LPR represents a step beyond the electronic health record, extending past a specific healthcare network to a regional or national level. It's critical that LPRs share a common data format and structure so that healthcare providers can interact with them quickly and easily. Data standards for LPRs are key to interoperability and can help address healthcare fragmentation, which, in turn, can help save lives by improving care.

FHIR

Fast Healthcare Interoperability Resources (FHIR) is a commonly used schema comprising a set of API and data standards for exchanging healthcare data. FHIR enables semantic interoperability for effective communication between independent healthcare institutions; it essentially defines "how healthcare information can be exchanged between different computer systems regardless of how it is stored in those systems" (ONC Fact Sheet, "What is FHIR?"). FHIR aims to solve the fragmentation problem of the healthcare system by directly attacking the root of the problem: miscommunication. As with many other modern communication standards (for example, ISO 20022 for finance), FHIR builds its REST API on a JSON schema. This foundation is convenient, considering that most modern applications are built with object-oriented programming languages and use JSON as their standard file and data interchange format. It also makes it easier for developers to build applications, which is perhaps the most important point: The future of healthcare delivery may increasingly depend on applications that transform how patients and providers interact with healthcare systems for the better.

MongoDB: FHIR and healthcare app-ification

MongoDB is a document database and is therefore a natural fit for building FHIR applications. With JSON as the foundation of the MongoDB document model, developers can store and retrieve data from their FHIR APIs to and from the database with no translation or change of format needed.
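As a brief sketch of how direct this mapping is, the following stores a simplified FHIR Patient resource in MongoDB and queries it back by identifier. It assumes a locally running MongoDB instance; the fhir.patients namespace and sample values are hypothetical.

```python
# A minimal sketch: persist a simplified FHIR Patient resource as-is
# and query it back. Assumes a local MongoDB instance; the namespace
# (fhir.patients) and sample data are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
patients = client["fhir"]["patients"]

# A pared-down FHIR R4 Patient resource: JSON in, document in
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "identifier": [{"system": "urn:example:mrn", "value": "MRN-12345"}],
    "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
    "birthDate": "1974-12-25",
}
patients.insert_one(patient)

# Query nested FHIR fields directly with dot notation: no ORM, no ETL
found = patients.find_one({"identifier.value": "MRN-12345"})
print(found["name"][0]["family"])  # -> "Chalmers"
```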
In fact, organizations can adopt FHIR resources as the basis of a new, canonical data model that existing internal systems can begin to shift toward and conform to. One example is the Exafluence FHIR API, which is built on top of MongoDB. Exafluence's API allows for real-time data interchange by leveraging Apache Kafka and Spark, in either an on-premises or multi-cloud deployment. Software teams leveraging the Exafluence solution have seen the velocity of their FHIR interoperability projects increase by 40% to 60%. MongoDB's tool set lets teams develop value-add business solutions on the FHIR-native dataset, without ETL.

Beyond FHIR, the trend toward healthcare app-ification (i.e., the increasing use of applications in healthcare) clashes with pervasive legacy architectures, which typically are not optimized for the developer experience. Because of this reliance on legacy architectures, modernization or transformation initiatives often fail to take hold, or are postponed, as companies perceive the risks to be too high and the return on investment unclear. It doesn't have to be this way, however. MongoDB's industry-proven iterative approach to modernization reduces the risk of application and infrastructure migration and unlocks developer productivity and innovation. Interoperable, modern healthcare applications can now be built in a developer-friendly environment, with all the benefits expected from traditional databases (i.e., ACID transactions, an expressive query language, and enterprise-grade security). MongoDB also provides the freedom to deploy solutions anywhere (e.g., on-premises, multi-cloud), a major advantage for healthcare organizations, which typically run multi-environment deployments.

Healthcare and the cloud

Digital healthcare will accelerate the adoption of cloud technologies within the industry, enabling innovation at scale and unlocking billions of dollars in value. Healthcare organizations, however, have so far been reluctant to move workloads to the cloud, mostly because of data privacy and security concerns. To support cloud adoption initiatives, MongoDB Atlas offers a unique multi-cloud data platform, integrating MongoDB in a fully managed environment with enterprise-grade security measures and data encryption capabilities. MongoDB Atlas is HIPAA-ready and a key facilitator of GDPR compliance.

A holistic view of patient care

Interoperable healthcare records and communication standards will make longitudinal patient records possible, providing a much-sought-after holistic view of the patient and helping to fix healthcare fragmentation. Many challenges remain, including transforming legacy infrastructures into modern, flexible data platforms that can adapt to the exponential changes happening in the healthcare industry. MongoDB provides a developer data platform designed to unlock developer productivity, ultimately giving healthcare organizations the freedom to focus on what matters most: the patient. Learn more about how MongoDB supports healthcare organizations.

May 26, 2022
Applied

What Does the Executive Order on Supply Chain Security Mean for Your Business? Security Experts Weigh In on SBOMs

In the wake of high-profile software supply chain attacks, the White House issued an executive order requiring more transparency in the software supply chain. The Executive Order (14028) on Improving the Nation's Cybersecurity requires software vendors to provide a software bill of materials (SBOM). An SBOM is a list of ingredients used by software: the collection of libraries and components that make up an application, whether they are third-party, commercial off-the-shelf, or open source software. By providing visibility into all the individual components and dependencies, SBOMs are seen as a critical tool for improving software supply chain security. The new executive order affects every organization that does, or seeks to do, business with the federal government.

To learn more about the requirements and their implementation, MongoDB invited a few supply chain security experts for a panel discussion. In our conversation, Lena Smart, MongoDB's Chief Information Security Officer, was joined by three expert panelists: Dr. Allan Friedman, PhD, senior advisor and strategist, CISA; Clinton Herget, principal solutions engineer, Snyk; and Patrick Dwyer, CycloneDX SBOM project co-lead, Open Web Application Security Project.

Background

In early 2020, hackers broke into Texas-based SolarWinds' systems and added malicious code to the company's Orion software, which is used by more than 33,000 companies and government entities to manage IT resources. The code created a backdoor into affected systems, which hackers then used to conduct spying operations. In mid-2021, the Russian ransomware gang REvil exploited flaws in software from Kaseya, an IT management application popular with managed service providers (MSPs). The attacks multiplied before warnings could be issued, resulting in malicious encryption of data and ransom demands as high as $5 million. And in December 2021, a vulnerability was discovered in the open source Log4j logging library that enables attackers to execute code remotely on any targeted computer. The vulnerability resulted in massive reconnaissance activity, according to security researchers, and it leaves many large corporations that use the Log4j library exposed to malicious actors.

In our panel discussion, Dr. Friedman kicked off the conversation by drawing on the "list of ingredients" analogy, noting that knowing what's in the package at the grocery store won't, by itself, help you keep your diet or protect you from allergens; but good luck doing either without it. What you do with that information matters. So the data layer, Friedman says, is where we will start to see security practitioners implement new intelligence and risk-awareness approaches.

SBOM Use Cases

The question of what to do with SBOM data was top of mind for all of the experts on the panel. Friedman says that when the idea of SBOMs was first introduced, it was in the context of on-premises systems and network or firewall security. Now, the discussion is centered on SaaS products. What should customers expect from an SBOM for a SaaS product? As senior advisor and strategist at the Cybersecurity and Infrastructure Security Agency (CISA), Friedman says this is where the focus will be over the next few months as the agency engages in public discussions with the software community to define those use cases. A few of the use cases panelists cited included pre-purchase due diligence, forensic and security analysis, and risk assessment.
“At the end of the day, we're doing this hopefully to make the world of software more secure,” Smart says. No one wants to see another Log4j, the panelists agreed, but chances are we'll see something similar. A tool such as an SBOM could help determine exposure to such risks, or prevent them from happening in the first place. Dwyer waded into the discussion by emphasizing the need for SBOM production and consumption to fit into existing processes. “Now that we're automating our entire software production pipeline, that needs to happen with SBOMs as well,” Dwyer says. Herget agreed on the need to understand the use cases and edge cases, and to integrate them. “If we're just generating SBOMs to store them in a JSON file on our desktop, we've missed the point,” he says. “It's one thing to say that Maven can generate an SBOM for all Java dependencies in a given project, which is amazing until you get to integrating non-Java technologies into that application.” Herget says that in the era of microservices, you could be working with an application that has 14 different top-level languages involved, with all of their corresponding sets of open source dependencies handled by an orchestrated, cloud-based continuous integration pipeline. “We need a lot more tooling to be able to do interesting things with SBOMs,” Herget continued. “Wouldn't it be great to have search-based tooling to be able to look at dependency tree relationships across the entire footprint?” For Herget, future use cases for SBOM data depend on a central question: What would a scalable, orchestrated way to consume SBOM data look like, one that we could then throw all kinds of tooling against to surface facts about our software footprint that we wouldn't otherwise have visibility into?

SBOMs and FedRAMP

In the past few years, Smart has been heavily involved in FedRAMP (Federal Risk and Authorization Management Program), which provides a standardized approach to government security authorizations for cloud service offerings. She asked the panelists whether SBOMs should be part of the FedRAMP SSP (System Security Plan). Friedman observed that FedRAMP is a “passed once, run anywhere” model: once a cloud service is approved by one agency, any other government agency can also use it. “The model of scalable data attestations that are machine-readable I think does lend itself as a good addition to FedRAMP,” Friedman says. Herget says that vendors will follow if the government chooses to lead on implementing SBOMs. “If we can work toward a state where we're not talking about SBOMs as a distinct thing, or even an asset that we're working toward, but something that's a property of software, that's the world we want to get to.”

The Role of Developers in Supply Chain Security

As always, the role of the developer is one of the most critical factors in improving supply chain security, as Herget points out. “The complexity level of software has exceeded the capacity for any individual developer, even a single organization, to understand where all these components are coming from,” Herget says. “All it takes is one developer to assign their GitHub merge rights to someone else who's not a trusted party, and now that application, and all the applications that depend on it, are subject to potential supply chain attack.” Without supply chain transparency or visibility, Herget explains, there's no way to tell how many assets are implicated in the event of an exploit.
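As a rough sketch of the kind of SBOM-consuming tooling Herget describes, the following scans a directory of CycloneDX JSON SBOMs for a component of interest, such as log4j-core, to estimate exposure. The directory layout, file naming, and version list are assumptions for illustration only.

```python
# Illustrative sketch: scan a folder of CycloneDX JSON SBOMs for a
# vulnerable component to estimate exposure across a software footprint.
# The directory layout and version list are assumptions for this example.
import json
from pathlib import Path

def affected_sboms(sbom_dir: str, component: str, bad_versions: set[str]):
    """Yield (sbom filename, version) where a vulnerable component appears."""
    for path in Path(sbom_dir).glob("*.json"):
        bom = json.loads(path.read_text())
        for comp in bom.get("components", []):  # CycloneDX component list
            if comp.get("name") == component and comp.get("version") in bad_versions:
                yield path.name, comp["version"]

# Two of the versions affected by Log4Shell (CVE-2021-44228), for illustration
vulnerable = {"2.14.0", "2.14.1"}

for sbom, version in affected_sboms("./sboms", "log4j-core", vulnerable):
    print(f"{sbom}: ships log4j-core {version}")
```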
Putting that responsibility on developers isn't fair, though, because there are no tools or standardized data models that explain where all the interdependencies in an application ultimately lead. Ingredient lists are important, Herget says, but what matters more is the relationships among them: which components are included in a piece of software and why, who added them and when, all captured in a machine-readable and manipulable way. “It's one thing to say, we have the ingredients,” Herget says. “But then what do you do with that, what kind of analysis can you then provide, and how do you get actionable information in front of the developer so they can make better decisions about what goes into their applications?”

SBOM Minimum Requirements

The executive order lays out the minimum requirements of an SBOM, but our panelists expect that list of requirements to expand. For now, there are three general buckets of requirements:

Each component in an SBOM requires a minimum amount of data, including the supplier of the component, the version number, and any other identifiers of the component.
SBOMs must exist in a widely used, machine-readable format, which today means either CycloneDX or SPDX.
Policies and practices must govern how deep the SBOM tree should go in terms of dependencies.

Moving forward, the panelists expect the list of minimum requirements to grow to include additional identifiers, such as a hash or digital fingerprint of a component, and a requirement to update an SBOM whenever the software is updated. They also expect additional requirements for the dependency tree, such as a more complete tree, or at least the ability to generate the complete tree. “Log4j taught people a lot about the value of having as complete a dependency tree as possible,” Friedman said, “because it was not showing up in the top level of anyone's dependency graph.”

SBOMs for Legacy Systems

One of the hurdles to implementing SBOMs universally is what to do with legacy systems, according to Smart. Johannes Ullrich, Dean of Research for SANS Technology Institute, has said that it may be unrealistic to expect 10- or 20-year-old code to ever have a reasonable SBOM. Friedman pointed to the use of binary analysis tools to assess software and spot vulnerabilities, noting that an SBOM taken from the build process is far different from one built using a binary analysis tool. While the one taken from the build process represents the gold standard, Friedman says, there could also be incredible power in the binary analysis model; there just needs to be a good way to compare the two to ensure an apples-to-apples approach. “We need to challenge ourselves to make sure we have an approach that works for software that is in use today, even if it's not necessarily software that is being built today,” Herget says. As principal solutions engineer at Snyk, Herget says these are precisely the conversations they're having: What is the right amount of support for 30-year-old applications that are still used in production but were built before the modern concept of package management became integrated into developers' day-to-day workflows?
“I think these are the 20% of edge cases that SBOMs do need to solve for,” Herget says, “because if it's something that only works for modern applications, it's never going to get the support it needs on both the government and the industry side.” Smart closed the topic by saying, “One of the questions that we've gotten in the Q&A is, ‘What do you call a legacy system?’ The things that keep me awake at night, that's what I call legacy systems.”

Perfect Ending

Finally, the talk turned to perfection: how you define it, and whether it's worth striving for before launching something new in the SBOM space. Herget, half-joking, said that perfection would be never having these talks again. “Think about how we looked at DevOps five or 10 years ago — it was this separate thing we were working to integrate within our build process,” he says. “You don't see many panel talks on how we will get to DevOps today because it's already part of the water we're all swimming in.” Dwyer added that, for him, perfection is when SBOMs are naturally embedded in the modern software development lifecycle: all the tooling, the package ecosystems. “Perfection is when it's just a given that when you purchase software, you get an SBOM, and whenever it's updated, you get an SBOM, but you actually don't care because it's all automated,” Dwyer says. “That's where we need to be.”

According to Friedman, one of the things SBOMs have started to do is expose some of the broader challenges in the software ecosystem. One example is software naming and software identity: in many industries, Friedman says, we don't actually have universal ways of naming things. “And, it's not that we don't have any standards, it's that we have too many standards,” he explains. “So, for me, perfection is saying SBOMs are now driving further work in these other areas of security where we know we've accumulated some debt but there hasn't been a forcing function to improve it until now.”

May 23, 2022
Applied
