MongoDB Developer

Coding with MongoDB - news for developers, tips and deep dives

Revolutionizing Data Storage and Analytics with MongoDB Atlas on Google Cloud and HCL

Every organization requires data they can trust—and access—regardless of its format, size, or location. The rapid pace of technological change and the shift toward cloud computing are revolutionizing how companies handle, govern, and manage their data by freeing them from the heavy operational burden of on-premises deployments. Enterprises are looking for a centralized, cost-effective solution that lets them scale their storage and analytics so they can ingest data and perform artificial intelligence (AI) and machine learning (ML) operations, ultimately expanding their market reach. This blog post explores why companies should partner with MongoDB Atlas on Google Cloud to begin their data revolution journey, and how HCL Technologies can support customers looking to migrate.

MongoDB Atlas as the distributed data platform

MongoDB Atlas is the leading database-as-a-service on the market for three main reasons:

- Unparalleled developer experience: organizations can bring new features to market at high velocity
- Horizontal scalability: supports hundreds of terabytes of data with sub-second queries
- Flexibility: stores data to meet varied regulatory, operational, and high-availability requirements

The versatility of MongoDB’s document model makes it ideal for modern data-driven use cases that require support for structured, semi-structured, and unstructured content within a single platform. Its flexible schema accommodates new application features without the costly schema migrations typically required with relational databases.

MongoDB Atlas extends the core database with services like Atlas Search and MongoDB Realm that are a necessity for modern applications. Atlas Search provides a powerful Apache Lucene-based full-text search engine that automatically indexes data in your MongoDB database, with no need for a separate dedicated search engine or error-prone replication processes (a minimal query sketch appears below). Realm provides edge-to-cloud sync and backend services to accelerate and simplify mobile and web development. Atlas’ distributed architecture supports horizontal scaling for data volume, query latency, and query throughput, offering the scalability benefits of distributed data storage alongside the rich functionality of a fully featured, general-purpose database. MongoDB Atlas is unique in delivering the most-wanted database as a managed service, and the world’s largest companies rely on it for their mission-critical production applications.

Innovation powered by collaboration with HCL Technologies

MongoDB’s versatility as a general-purpose database, combined with its massive scalability, makes it a perfect foundation for analytics, visualization, and AI/ML applications on Google Cloud. As an MSP partner for Google Cloud, HCL Technologies helps enterprises accelerate and de-risk their digital agenda, powered by Google Cloud. We’ve successfully implemented applications leveraging MongoDB Atlas on Google Cloud, building on MongoDB’s flexible JSON-like data model, rich querying and indexing, and elastic scalability in conjunction with Google Cloud’s class-leading cloud infrastructure, data analytics, and machine learning capabilities. HCL is working with some of the world’s largest enterprises to build secure, performant, and cost-effective solutions with MongoDB and Google.
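As referenced above, here is a minimal sketch of what an Atlas Search query looks like from Python. It assumes a cluster loaded with the sample_mflix sample dataset and a default Atlas Search index already created on the movies collection; the connection string is a placeholder.

```python
from pymongo import MongoClient

# Placeholder connection string; assumes the sample_mflix dataset is loaded
# and a default Atlas Search index exists on the movies collection.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
movies = client["sample_mflix"]["movies"]

# $search queries the Lucene index Atlas maintains automatically; no separate
# search engine or replication process is involved.
for doc in movies.aggregate([
    {"$search": {"index": "default",
                 "text": {"query": "space adventure", "path": "plot"}}},
    {"$limit": 5},
    {"$project": {"_id": 0, "title": 1}},
]):
    print(doc["title"])
```

Because $search is just another aggregation stage, the results can be filtered, joined, and reshaped with the same pipeline operators used everywhere else in MongoDB.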
With deep technical expertise in Google Cloud, MongoDB, machine learning, and data science, our dedicated team developed a reference architecture that ensures high performance and scalability. This is simplified by MongoDB Atlas’ support for Google Cloud services, which allows it to operate essentially as a cloud-native solution. Highlighted features include:

- Integration with Google Cloud Key Management Service
- Use of Google Cloud’s native storage snapshots for fast backup and restore
- The ability to create read-only MongoDB nodes in Google Cloud to reduce latency with Google Cloud-native services regardless of where the primary node is located (even other public cloud providers!)
- Integrated billing with Google Cloud
- The ability to span a single MongoDB cluster across Google Cloud regions worldwide, and more

As represented in Figure 1 below, MongoDB Atlas on Google Cloud can be used as a single database solution for transactional, operational, and analytical workloads across a variety of use cases.

Figure 1: MongoDB’s core characteristics and features

The architecture in Figure 2 demonstrates the ease of reading and writing data to MongoDB from Google Cloud services. Dataflow, Cloud Data Fusion, and Dataproc can be leveraged to build data pipelines that migrate data from heterogeneous databases to MongoDB and feed data into interactive dashboards built with Looker. These pipelines support both batch and real-time ingestion workloads and can be automated and orchestrated using Google Cloud-native services (a minimal pipeline sketch follows this section).

Figure 2: MongoDB Atlas’ integration with core Google Cloud services

A data platform built with MongoDB Atlas and Google Cloud offers an integrated suite of services for storage, analysis, and visualization.

Address your business challenges with HCL: Industry use cases

Data-driven solutions built with MongoDB Atlas on Google Cloud have applications across industries such as financial services, media and entertainment, healthcare, oil and gas, energy, manufacturing, retail, and the public sector. Every industry can benefit from this highly integrated storage and analytics solution.

Use Cases and Benefits

Data lake modernization with low cost and high availability for media and entertainment customers: Maintaining a highly available, low-cost data lake is an obstacle for any online entertainment platform that builds mobile or web ticketing applications. Building on Google App Engine with MongoDB Atlas clusters in the backend enables a high-availability, low-cost data platform that seamlessly feeds data to downstream analytics platforms in real time.

Unified data platform for retail customers: Retail businesses frequently ask for an agile environment that encourages innovation among their engineers. With its agility in scaling and resource management, seamless multi-region clusters, and premium monitoring, MongoDB Atlas on Google Cloud is a fantastic choice for building a single data platform. This simplifies the management of disparate data platforms and lets developers focus on new ideas.

High-speed, real-time supply chain data platform for manufacturers: With real-time visibility and distributed data services, supply chain data can become a competitive advantage. MongoDB Atlas on Google Cloud provides a solid foundation for creating distributed data services with a unified, easy-to-maintain architecture, and its speed simplifies supply chain operations with real-time data analytics.
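As a companion to the Figure 2 discussion, below is a minimal, hedged sketch of such a pipeline using Apache Beam (the SDK behind Dataflow) and its MongoDB connector. The URI, database, and field names are placeholders, and a real migration would read from a source system rather than an in-memory list.

```python
import apache_beam as beam
from apache_beam.io.mongodbio import WriteToMongoDB
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder Atlas URI; a real pipeline would read from a source system
# (e.g., a database or Pub/Sub) and run on Dataflow by selecting the
# DataflowRunner in the pipeline options.
URI = "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net"

with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read"      >> beam.Create([{"sku": "A-100", "qty": 3},
                                      {"sku": "B-200", "qty": 7}])
        | "Transform" >> beam.Map(lambda d: {**d, "status": "received"})
        | "Write"     >> WriteToMongoDB(uri=URI, db="supply_chain", coll="orders")
    )
```

The same pipeline shape works for both batch and streaming sources, which is what makes Beam a natural fit for the batch and real-time ingestion workloads described above.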
The way forward

Even in just the past decade, organizations have had to adapt to an extremely fast pace of innovation in the data analytics landscape: moving from batch to real time, on-premises to cloud, and gigabytes to petabytes, with advanced AI/ML models made far more accessible by providers like Google Cloud. With our track record of success in this domain, HCL Technologies is uniquely positioned to help organizations realize the joint benefits of building data analytics applications with best-of-breed solutions from Google Cloud and MongoDB. Visit us to learn more about the HCL Google Ecosystem Business Unit and how we can help you harness the power of MongoDB Atlas and Google Cloud to change the way you store and analyze your data.

January 13, 2022
Developer

Retail Tech in 2022: Predictions for What's on the Horizon

If 2020 and 2021 were all about adjusting to the Covid-19 pandemic, 2022 will be about finding a way to be successful in this “new normal”. So what should retailers expect in the upcoming year, and where should you consider making new retail technology investments?

Omnichannel is still going strong

Who would have anticipated that the Covid-19 pandemic would still be disrupting lives after two years? For the retail industry, this means more of the same: omnichannel shopping. Despite the hope many of us had for the end of the pandemic and a gradual return to in-person shopping, retailers can expect to keep accommodating every kind of shopping experience – online shopping, brick-and-mortar shopping, buy online and pick up in store, reserve online and pick up in store. Even beyond the pandemic, the face of shopping is likely changed forever. Retailers therefore need to start considering the long-term tech investments required to meet transforming customer expectations. Adopting solutions that offer a single view of the consumer gives you the unique opportunity to personalize offerings, products, and loyalty programs to each customer’s demand. With a superior consumer experience, you can achieve repeat business and increased customer loyalty. While many retailers may have thought they could “get by” with their current solutions until the pandemic ends, it’s time to rethink that approach and start exploring longer-term solutions that improve omnichannel shopping experiences.

Leaner tech stacks over many specialized solutions

In 2022, you should explore solutions that allow your IT teams to do more with less. The typical retail tech stack looks something like the diagram below: legacy relational databases supplemented by specialist NoSQL and relational databases, plus additional mobile data and analytics platforms. As a result, retailers looking to respond quickly to changing consumer preferences and improve the customer experience face an uphill battle against siloed data, slow data processing, and unnecessary complexity. Your development teams are so busy cobbling solutions together and maintaining different technologies at once that they fail to innovate to their full potential, so you’re never quite able to pull ahead of the competition. This is the data and innovation recurring tax (or DIRT): the ongoing tax that spaghetti architectures like the example above levy on your business. As technology grows more sophisticated and data grows more complex, companies are expected to react almost instantaneously to signals from their data. Legacy technologies, like relational databases, are rigid, inefficient, and hard to adapt, making it difficult to deliver true innovation to your customers and employees in a timely manner. It’s time to rethink your legacy systems and adopt solutions that streamline operations and seamlessly share data, so you’re working with a single source of data truth. Many retailers recognize the need to upgrade legacy solutions and move away from a sprawl of database technologies, but you may not know where to start. Look for modern data applications that simplify data collection from disparate sources and include automated conflict resolution for added data reliability (a minimal single-view sketch follows below).
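To make the single-view idea concrete, here is a minimal sketch of folding records from each sales channel into one customer document with MongoDB. The connection string, collection, and field names are hypothetical.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
db = client["retail"]

# Hypothetical event arriving from one channel (here, the web store).
event = {"customer_id": "C1042", "channel": "web",
         "email": "jane@example.com", "order_total": 54.20}

# Fold the channel record into a single-view document, one per customer;
# upsert=True creates the document the first time a customer is seen.
db.customer_360.update_one(
    {"_id": event["customer_id"]},
    {"$set": {f"channels.{event['channel']}": event},
     "$inc": {"lifetime_value": event["order_total"]}},
    upsert=True,
)
```

Each channel writes into its own subdocument, so the single-view document accumulates a complete picture of the customer without any channel overwriting another.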
Also, consider what you could do with a fully managed application data platform like MongoDB Atlas. With someone else doing the admin work, your developers are free to focus on critical work or turn their talents to innovation.

Digital worker enablement will increase retention

For employees, 2022 looks set to continue last year’s trend of the “Great Resignation”. To combat worker fatigue and retain your workforce, you need to prioritize worker engagement. One way to better engage your employees is through mobile workforce enablement. While many companies consider how to engage their customers through digital-friendly experiences, you shouldn’t forget about your workers in the process. Global companies like Walmart are starting to invest in mobile apps to enable their workforce. A modern, always-on retail workforce enablement app could transform the way your employees do their jobs. Features like a real-time view of stock, cross-departmental collaboration, detailed product information, and instant communication with other stores can simplify your workers’ experience and help them better serve your customers. Your workers need an always-on app that syncs with your single source of data truth regardless of connectivity (which may be an issue, as retail workers are constantly on the move). But building a mobile app with data sync capabilities can be a costly and time-intensive investment. MongoDB Realm Sync solves this with an intuitive, object-oriented data model that is simple to use and an out-of-the-box data synchronization service. When your mobile data seamlessly integrates with back-end systems, you can deliver a modern, distributed application data platform to your workers.

Huge investment in the supply chain

From microchips to toilet paper, disruptions in the supply chain were a huge issue in 2020 and 2021, and the pain continues in 2022. While some supply chain issues remain beyond retailers’ control, there are steps you can take to mitigate the pain and prepare for future disruptions. Warehouse tech is getting smarter, and you need to upgrade your solutions to keep up. For starters, consider adopting the right application data platform to unify siloed data and gain a single view of operations. A single view of your data allows for better management of store-level demand forecasts, distribution center-to-store network optimizations, vendor ordering, truck load optimizations, and much more. With a modern application data platform, all of this data feeds into a single-view application, giving retailers the insights to react to supply chain issues in real time. With disruption set to dominate 2022 as it did 2020 and 2021, investing in proactive solution upgrades could help your business not only survive, but thrive. Want to learn more about gaining a competitive advantage in the retail industry? Get this free white paper on retail modernization.

January 13, 2022
Developer

Ventana Research's Latest Report Highlights MongoDB's Role as a Cloud Data Platform Provider

Ventana Research, a market advisory and research firm, recently published an Analyst Perspective on MongoDB, noting that MongoDB and its application data platform give businesses the ability to accelerate development and data-driven decision-making. Tracing the evolution from traditional databases to modern, cloud-based application data platforms, the study covers multiple trends related to both the present and future of data platform software. We have identified six key trends represented in the Ventana Research Analyst Perspective:

- Non-relational, or NoSQL, databases are on the rise. We see this as evidence of an unprecedented, widespread change in how businesses perceive and use their databases.
- Cloud-based services and products are rapidly gaining popularity. Given the rise of real-time, data-driven applications, organizations are relying more and more on the flexibility, availability, and functionality of cloud-native data platforms. Such products are ideal for quickly building competitive products, delivering highly personalized experiences, and improving business agility.
- Operational database requirements will only become more demanding. As applications become more advanced, databases will become a pivotal part of an organization’s success — or failure. We believe that in order to keep up with their applications (and their competition), companies require a comprehensive, powerful application data platform like MongoDB Atlas.
- Convergence is the name of the game. As companies seek out new and better operational data platforms, both relational and non-relational database providers will venture into areas traditionally dominated by their competitors. Examples include non-relational databases (like MongoDB) adding relational features like ACID transactions, and relational databases offering compatibility with non-relational data models like graphs or documents.
- Companies are increasingly opting for hybrid and multi-cloud models. MongoDB Atlas’ multi-cloud clusters enable users to leverage exclusive provider features (like Google Cloud’s AI tools), improve availability in geographic regions, or migrate data across clouds with no downtime.
- Non-relational, cloud-native databases are becoming more powerful — and more attractive to customers. Thanks to convergence and competition, non-relational databases are becoming ever more capable. Their advancements include real-time analytics, rich visualizations, and mobile data sync and storage.

Read Ventana Research Analyst Perspectives to gain insight into the current data landscape and the possibilities of tomorrow. Updated January 17, 2022.

January 12, 2022
Developer

Data and the European Landscape: 3 Trends for 2022

The past two years have brought massive changes for IT leaders: large and complex cloud migrations; unprecedented numbers of people suddenly working, shopping, and learning from home; and a burst in demand for digital-first experiences. Like everyone else, we are hoping that 2022 isn’t so disruptive (fingers crossed!), but our customer conversations in Europe lead us to believe the new year will bring new business priorities. We’re already noticing changes in conversations around vendor lock-in, thanks to the Digital Markets Act; a new enthusiasm for combining operational and analytical data to drive new insights faster; and a more strategic embrace of sustainability. Here’s how we see these trends playing out in 2022.

The Digital Markets Act draws new attention to cloud vendor lock-in in Europe

We’ve heard plenty about the European Commission’s Digital Markets Act, which, in the name of ensuring fair and open digital markets, would place new restrictions on companies deemed to be digital “gatekeepers” in the region. That discussion will be nothing compared to the vigorous debate we expect once the EU begins the very tricky political business of determining exactly which companies fall under the act. If the EU sets the bar for revenues, users, and market size high enough, it’s possible that the regulation will end up affecting only Facebook, Amazon, Google, Apple, and Microsoft. But a European group representing 2,500 CIOs and almost 700 organisations is now pushing to have the regulation encompass more software companies. Their main concern centers on “distorted competition” in cloud infrastructure services and a worry that companies are being locked into one cloud vendor. One trend pushing back on cloud vendor lock-in that will likely accelerate in 2022 is the embrace of multi-cloud strategies. We expect to see more organisations in the region pursuing multi-cloud environments as a means to improve business continuity and agility whilst being able to access best-of-breed services from each cloud provider. As we have always said: “It’s fine to date your cloud provider, but don’t ever marry them.”

The convergence of operational and analytical data

Operational and analytical data are almost always processed in different systems, each tuned to its use case and managed by a separate team. But because that data lives in separate places, it’s almost impossible for organisations to generate insights and automate actions in real time, against live data. We believe 2022 is the year we’ll see a critical mass of companies in the region make significant progress toward converging their operational and analytical data. We’re already starting to see some of the principles of microservices in operational applications, such as domain ownership, applied to analytics as well. We’re hearing about this from many of our customers locally, who are looking at MongoDB as an application data platform that allows them to query both real-time and historical data using a unified platform and a single query API. As a result, the applications they build become more intelligent and contextual for their users, while avoiding dependencies on centralized analytics teams that otherwise slow down how quickly new, data-driven experiences can be released.

Sustainability drives local strategic IT choice

Technology always has some environmental cost. Sometimes that’s obvious — such as the energy needs and emissions associated with Bitcoin mining.
More often, though, the environmental costs are well hidden. The European Green Deal commits the European Union to reducing emissions by 55% by 2030, with a focus on sustainable industry. With the U.N. Climate Change Conference (COP26) recently completed in Glasgow, and coming off the hottest European summer on record, climate issues are top of mind. That means our customers are increasingly looking to make their technical operations more sustainable — including in their choice of cloud provider and data centers. According to research from IDC, more than 20% of CxOs say that sustainability is now important in selecting a strategic cloud service provider, and some 29% of CxOs are including sustainability in their RFPs for cloud services. Most interesting, 26% say they are willing to switch to providers with better sustainability credentials. Historically, it’s been difficult to make a switch like that. That’s part of the reason we built MongoDB Atlas — to give our customers the flexibility to run in any region, with any of the three largest cloud providers, to make it easy to switch between them, and even to run a single database cluster across them. Publicly available information about the footprint of individual regions and even single data centers will make it simpler for companies to make informed decisions. Already, at least one cloud platform has added indicators to regions with the lowest carbon footprint. So while we hope 2022 will not be as disruptive as the years gone by, it will still bring seminal changes to our industry. These changes will prompt organisations toward more agile, cohesive, and sustainable data platform strategies as they seek to gain competitive advantage and exceed customer expectations.

Source: IDC, European Customers Engage Services Providers at All Stages of Their Cloud Journey, IDC Survey Spotlight, Doc #EUR248484021, Dec 2021

December 21, 2021
Developer

Joyce, a Decentralized Approach to Foster Business Agility

Despite all of the tools and methodologies that have arisen in the last few years, many companies, particularly those that have been in the market for decades, struggle to leverage their operational data to build new digital products and services. According to research and surveys conducted by McKinsey over the last few years, the success rate of digital transformations is consistently low, with less than 30% succeeding at improving their company’s performance. There are many reasons for this, but most can be summarized in a sentence: A digital transformation is primarily an organizational and cultural change, and only then a technological shift. The question is not whether digital transformation is a good thing, nor whether moving to the cloud is a good choice. Companies need (badly, in some cases) a digital transformation, and yes, the pros of moving to the cloud usually outweigh the cons. So, let’s dig deeper and analyze three of the main problems companies face when they go on this journey.

Digital product development

Products by nature are customer-driven, but companies run their businesses on multiple back-end systems that are instead purpose-driven. Unless you run a very small business, different people with different objectives own those products and systems. Given this context, what happens when a company wants to launch a new digital product at speed? The back-end systems (CRMs, e-commerce, ERP, etc.) hold the data the product needs to bring to the customer. Some systems are SaaS, some are legacy, and others may be custom applications created by the company that disrupted the market with innovative solutions back in the day: the perfect recipe for integration hell. The product manager needs to coordinate and negotiate multiple change requests with the systems’ owners whilst trying to convince them to add the product’s needs to their backlogs in time to meet the deadline. Things get even worse when the new product relies on the computational power of the source systems: if those systems cannot handle the additional traffic, both the product and the core services will be affected.

Third-party integration

“Everybody wants the change, (almost) nobody wants to change.” In this ever-growing digital world, partnering with third parties (whether they are clients or service providers) is crucial, but everyone who has tried knows how challenging it is: non-standard interfaces, CSV files over FTP with fancy update rules, security issues… The list of unwanted things can grow indefinitely.

SaaS everywhere

The Software-as-a-Service model is extremely popular, and getting the service you want without worrying about the underlying infrastructure brings freedom and speed of adoption. But what happens when a big company relies on multiple SaaS products to run its business? Sooner or later, it experiences loss of control and rising costs in keeping a consistent view of the big picture. It has to deal with each SaaS product’s internal representation of its own data, multiple views of the same domain concept, and unplanned expenses to export, interpret, and integrate data from different sources in different formats.

Putting it all together

All the issues above fall into a well-known category of information technology: they are integration problems, and over the years, many vendors have promised a definitive solution. Now, you can consider low-code/no-code platforms with hundreds of ready-made connectors and modern graphical interfaces. Problem solved, right?
Well, not really. Low-code integration platforms simplify implementation, and they are really good at it, but in doing so they oversimplify the real challenge: creating and maintaining a consistent set of APIs shaped around business value over time, and preventing the interfaces from leaking internal complexities to the rest of the company. That is something that has to be defined and maintained through architectural choices and proper skills, completely hidden behind the selling points of such platforms. There are two different ways to solve integration problems:

- Centralized, using adapters. The logic is pushed to a central orchestration component, with integration managed through a set of adapters. This is the rather old-school SOA approach, the one the majority of integration platforms on the market are built on.
- Decentralized, pushing the logic to the edges and giving autonomous teams the freedom to define both the boundaries and the APIs that a domain must expose to deliver business value. This more modern approach has arisen alongside microservices and, in the analytical world, the concept of the data mesh.

The former gives speed at the start and the illusion of reducing the number of choices and skills needed to manage the problem, but in the long run it inevitably accumulates technical debt. Lacking the necessary degrees of freedom, you lose the ability to evolve the integration points over time, the same thing that caused the transition from SOA to microservices architectures. The latter needs the relevant skills, vision, and ability to execute, but it gives immediate results and lets you flexibly manage the evolution of the enterprise architecture over time.

Old problems, new solutions

At Sourcesense, over the last 20 years we have partnered on hundreds of projects to bring agility, speed, and new open-source technology to our customers. Many times through the years we have faced the integration challenges above, and yes, we tried to solve them with the technology available at the time: we built integration solutions on SOA (when it was the best of breed) and worked with many of the integration platforms on the market. We struggled with the issues and limitations of the integration landscape, and we have listened to our customers’ needs and to where expectations have fallen short. The rise of agile methodologies, cloud computing, and new techniques, technologies, and architectural styles has given an unprecedented boost to software evolution and the ability to support business needs, so we embraced the new wave and now have growing experience in solving problems with these tools. Along the way, we saw a recurring pattern in integration problems and the effectiveness of data hubs as enterprise architecture components for solving them, so we built one of our own: Joyce.

Data hubs

“Data hub” is a relatively new term that refers to software platforms that collect data from different sources with the main purpose of distribution and sharing. Since this definition is broad and vague, let’s add some other key elements that matter and help define the contours of our implementation. Collecting data from different sources can bring three major benefits:

- Computational decoupling from the sources.
Pulling (or pushing) the data out of the originating systems means that client applications and services interact with the hub rather than directly with the sources, preventing the sources from being slowed down by additional traffic.
- Catalog and discoverability. If data is collected correctly, the result is a catalog that lets people inside the organization search, discover, and use the data inside the hub.
- Security. The main purpose of a hub is distribution and sharing, which immediately focuses attention on access control and security hardening. A single access point simplifies overall data security because it significantly reduces the number of systems clients have to interact with to gather the data they need.

Joyce, how it works

The cornerstone concept of Joyce is the schema. It lets you shape the ingested data and define how that data is made available to client services. Using the same declarative approach made popular by Kubernetes, the schemas describe the expected result, and the platform performs the actions to make it happen. Schemas are standard JSON Schema files stored and classified in a catalog. Their definitions fall into three categories:

- Input: how to gather and shape the source data. We leverage the Kafka Connect framework to provide ready-made connectors for a wide variety of sources. The ingested data can be filtered, formatted, and enriched with transformation handlers (domain-specific extensions of JSON Schema).
- Model: creates new aggregates from the data stored in the platform. This gives you the freedom to model data the way client services need it.
- Export: bulk data export capability. An export can be any query run against the existing data, with an optional temporal filter.

Input and model data is made available to all client services with the proper authorization grants through auto-generated REST and GraphQL APIs. It is also possible to subscribe to a dedicated topic if an event-driven approach is more suitable for the use case. (An illustrative schema sketch follows at the end of this section.)

MongoDB: the key for a flexible model and performance at scale

We rely heavily on MongoDB. Thanks to its flexibility, we can easily map any data structure the user defines to collect the data; half of a schema definition is essentially the definition of a MongoDB schema. (We also auto-generate one schema per collection to guarantee data integrity.) Joyce runs in a Kubernetes cluster, and all of its services are inherently stateless to exploit the full potential of horizontal scaling. The architecture is based on the CQRS pattern, which means that writes and reads are completely decoupled and can scale independently to meet the unique needs of the production environment. MongoDB is also the backing database of the API layer, so we can keep the promise of low latency, high throughput, and continuous availability across all components of the stack. The platform is available as a fully managed PaaS on the three major cloud providers (AWS, Azure, GCP), but it can also be installed on existing infrastructure, whether in the cloud or on premises.
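To give a feel for the declarative flow described above, here is a purely illustrative sketch of an input schema. The structure and field names are hypothetical and do not reflect Joyce’s actual schema format; they simply mirror the Input category (a Kafka Connect source plus transformation handlers layered on JSON Schema).

```python
# Purely illustrative input-schema sketch, NOT Joyce's real schema format:
# field names are hypothetical and only echo the Input description above.
customer_input_schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "kind": "input",                          # vs. "model" or "export"
    "source": {                               # gathered via Kafka Connect
        "connector": "jdbc",
        "table": "customers",
    },
    "properties": {
        "customerId": {"type": "string"},
        "fullName": {
            "type": "string",
            # hypothetical transformation handler enriching the raw fields
            "$transform": "concat(firstName, ' ', lastName)",
        },
    },
}
```

The point of a declarative definition like this is that the user states the desired shape of the data and the platform takes care of connecting, ingesting, and exposing it.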
Final considerations

There are many challenges leaders must face for a successful digital transformation. They need to guide their organizations through a process that involves change on many levels, and the exponential growth of technological solutions in the last few years adds more complexity and confusion. The evolution of organizational models and methodologies points in the direction of shared responsibility, people empowerment, and autonomous teams under light but effective central governance. The same evolution permeates novel approaches to enterprise architecture like the data mesh. Unfortunately, there is no silver bullet, just the right choices for a given context. Despite all the marketing and hype around one solution or another for all of your digital transformation needs, a long-term successful shift needs guidance, competence, and empowerment. We built Joyce to reduce the burden of repetitive tasks and boilerplate code, so you get results faster and catch the low-hanging fruit, without trying to replace the architectural thinking necessary to properly define the current state and evolution of our customers’ enterprise architectures. If you’re struggling with the problems listed at the beginning of this article, give Joyce a try. Learn more about Joyce.

December 21, 2021
Developer

FHIR Technology is Driving Healthcare's Digital Revolution

Technology supporting healthcare’s digital transformation is so pervasive that the question isn’t what technology to choose, but rather, what problems need to be solved. Advancing technology and access to secure, real-time data analytics will vastly improve patients’ health and happiness, and growing interoperability standards are pushing organizations forward in their digital transformations. Together with the Healthcare Information and Management Systems Society (HIMSS) and leading healthcare insurance provider Humana, MongoDB recently released a three-part podcast series chronicling the ways Fast Healthcare Interoperability Resources (FHIR), AI, and the cloud are reshaping healthcare for the better. Here’s a quick roundup of our discussions:

- Data is the future of healthcare. Whether providers are driving patient engagement through wearable devices, wellness programs, or connected care, data will take healthcare to the next digital frontier. We’ll see these advancements through AI, FHIR, and the cloud.
- FHIR is revolutionizing healthcare technology. Not only is FHIR implementation a requirement, it’s also a crossroads for data architects. Choosing the right approach has deep implications for healthcare IT.
- The operational data layer (ODL) approach to interoperability makes the impossible possible. Through Humana’s digital transformation journey, it became clear that meaningful progress isn’t possible on core legacy database systems.

AI, FHIR, and the cloud: Why data is the future of healthcare

In this episode, we dive into what a digital transformation would look like for the healthcare industry and some of the biggest technology challenges facing healthcare today. A digitally transformed healthcare industry will weave real-time data analytics into more personalized care. Patients today want a more modern healthcare experience that includes telemedicine, digital forms, and touchless mobile check-ins. The end goal is simple: maximize the human experience while moving away from legacy technology systems that slow down both healthcare practitioners and patients. When it comes to today’s biggest healthcare challenges, the cloud stands out as a key driver of both promise and peril. The promise is that we can build applications, go to market, and reach patients through wellness programs more quickly. The peril lies in the infrastructure, which is unfamiliar to many healthcare organizations. This presents a unique challenge for the architects, and certainly the developers, at organizations with older legacy systems. The challenge is avoiding a simple lift and shift, or cloud for the sake of cloud, and moving from mere modernization to actual transformation. Listen to the episode to hear the entire conversation.

Bring the FHIR inside for digital transformation

In episode 2, HIMSS and MongoDB take a closer look at why FHIR is a change agent in healthcare technology, and how healthcare organizations globally are using the new data standard to jump-start legacy modernization and digital transformation.

What is FHIR?

The FHIR standard is a common set of schema definitions and APIs that helps providers and patients manage and exchange healthcare data. Using FHIR, records provided by healthcare organizations are standardized into a common data model served over REST-based APIs, making the data that healthcare providers and payers use easier to exchange. Growing regulatory pressure has accelerated U.S.
FHIR adoption among healthcare organizations and technology vendors. The Centers for Medicare and Medicaid Services (CMS) started a rolling deadline for FHIR compliance in 2020, with fines for institutions that fall behind. As a result, for most U.S.-based healthcare providers, payers, and their technology vendors, the past few years have been a headlong race to adopt FHIR. Here are three reasons why FHIR is hugely significant for healthcare technology leaders:

- It’s a federal mandate from the Centers for Medicare & Medicaid Services.
- It’s a complex data integration challenge.
- Legacy systems built before the mid-2010s are not interoperable with FHIR.

FHIR implementation approaches

For large organizations with huge data requirements, data architects can experience paralysis from the sheer volume of legacy systems to unwind. These groups have all of their patients’ electronic healthcare record information, payer information, and more bound up in legacy systems, none of which is interoperable with FHIR. The second challenge is cloud migration, which can be skirted by organizations using a checkbox-compliance approach: API layers are used to ingest and serve data to legacy systems but are not really integrated with those systems in real time. The most successful approach is not to rewrite, unwind, or replace legacy systems completely, but to keep them contained. We recommend bringing in an operational data layer that exposes the information in the legacy system and keeps it in sync, landing the data in an ODL in the FHIR standard (a minimal sketch appears at the end of this post). With the FHIR API, patients and providers can interact with data in real time and access records within milliseconds of a diagnosis. Records stay synced with legacy systems in real time, and patients’ private data is protected. Delve into the full conversation in episode 2.

FHIR and the future of healthcare at Humana

You don’t have to take the rip-and-replace approach when modernizing your legacy systems with an ODL. This was key to successful modernization for Humana, as discussed in the third and final episode in our series. For large enterprises that may have decades’ worth of acquired legacy systems, often pulling similar datasets from disparate databases, the pursuit of modernized interoperability can begin to look like an impossible task. Listen to the final episode of our podcast series to hear how Humana’s ODL approach met the company’s data velocity requirements, and what’s next for personalized healthcare and interoperability at Humana.

More related FHIR and healthcare resources

- [White paper] Bring the FHIR Inside: Digital Transformation Without the Rip and Replace
- [On-demand webinar] Building FHIR Applications with MongoDB
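As promised above, a closing illustration of why the document model suits a FHIR ODL: FHIR resources are already JSON, so they can be stored and served as-is. This is a minimal sketch with placeholder connection details, a hypothetical collection name, and a sample FHIR R4 Patient.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
odl = client["fhir_odl"]

# A FHIR R4 Patient resource is plain JSON, so it lands in the ODL as-is.
patient = {
    "resourceType": "Patient",
    "id": "example-123",
    "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
    "birthDate": "1974-12-25",
}
odl.patients.replace_one({"id": patient["id"]}, patient, upsert=True)

# A read that could sit behind a FHIR-style GET /Patient/{id} endpoint.
print(odl.patients.find_one({"id": "example-123"}, {"_id": 0}))
```

In a full ODL, the upsert would be driven by a sync pipeline from the legacy system, so reads like the one above always reflect the system of record.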

December 21, 2021
Developer

Introducing Pay as You Go MongoDB Atlas on AWS Marketplace

We’re excited to introduce a new way of paying for MongoDB Atlas. AWS customers can now pay Atlas charges via our new AWS Marketplace listing. Through this listing, individual developers can enjoy a simplified payment experience via their AWS accounts, while enterprises have another way to procure MongoDB in addition to privately negotiated offers, which were already supported via AWS Marketplace. Previously, customers who wanted to pay via AWS Marketplace had to commit to a certain level of usage upfront. Pay-as-you-go has long been available directly in Atlas via credit card, PayPal, and invoice — but not in AWS Marketplace, until today. With this new listing and integration, you can pay via AWS with no upfront commitments. Simply subscribe via AWS Marketplace and start using Atlas. You can get started with Atlas’s free-forever tier, then scale as needed. You’ll be charged in AWS only for the resources you use in Atlas, with no payment minimum. Deploy, scale, and tear down resources in Atlas as needed; you’ll pay only for the hours you use them. Atlas comes with a Basic support plan via in-app chat; if you want to upgrade to another Atlas support plan, you can do so in Atlas. Usage and support costs are billed together to your AWS account daily. If you’re connecting Atlas to applications running in AWS, or integrating with other AWS services, you’ll see all of your costs in one place in your AWS account. To get started with Atlas via AWS Marketplace, visit our Marketplace listing and subscribe using your AWS account. You’ll then be prompted to either sign in to your existing Atlas account or sign up for a new one. Try MongoDB Atlas for free today!
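Once you’re subscribed, working with Atlas is unchanged regardless of how it’s billed. For instance, here is a minimal connectivity check from an application, with a placeholder connection string.

```python
from pymongo import MongoClient

# Placeholder connection string; paste your own from the Atlas UI.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
print(client.admin.command("ping"))  # {'ok': 1.0} when the cluster is reachable
```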

December 15, 2021
Developer

10 Signs Your Data Architecture is Limiting Your Innovation: Part 2

With the massive amounts of data organizations now ingest, store, and analyze comes a massive responsibility to monitor, manage, and protect it. Unfortunately, many businesses are functioning with little insight into how their data is stored and who is accessing it — and their overly complex data architecture can turn those challenges into serious liabilities. Frail security creates unnecessary risk, and if you are not in control of your data architecture, the next big compliance offender or data breach victim could be you. These risks — and the time and resources required to address them — make up part of a hidden tax on your innovation. We call it DIRT: the Data & Innovation Recurring Tax. Our experts have identified 10 symptoms that can indicate your business is paying DIRT. Read about them all in our white paper 10 Signs Your Data Infrastructure is Holding You Back, and check out Part 1 of this blog series. Here, we highlight two signs of this innovation tax that are all about security.

Symptom #3: That last big data breach — or the next one — is on you

The more complex your data architecture, the more threat vectors you need to cover and the more complicated and time-consuming it becomes to maintain security. Each data store and application may have its own security framework and requirements — its own access controls, role definitions, and login procedures. Each database may in turn be connected with multiple other technologies and vendors, further adding to the time and complexity needed to keep everything secure. That’s a drag on your team: some 30% of IT managers spend more than 16 hours a month just on patching, and 14% spend more than 48 hours a month. Often, it’s impossible to keep up: 42% of breaches are the result of an attack for which a patch was available but not applied, according to a Ponemon Institute study of IT professionals. On average, 28% of vulnerabilities remain unaddressed.

Our Solution: With an application data platform, you have one set of database and data service components that share the same developer experience and the same underlying operational and security characteristics, making the whole estate much easier to defend. Organizations can apply a single overarching security policy and implementation without reinventing the wheel every time someone has a new use case for the data. Maintaining audit logs and access controls is dramatically streamlined. You get both security and speed.

Symptom #4: Rampant data duplication makes compliance a nightmare

In a modern organization, every part of the business should have access to the data and insights that help optimize performance and meet customer demand. But most data is trapped in silos, each with its own formats, access, and authorization controls. Attempts to address data silo issues often create their own web of separate niche data technologies, each trying to solve the problem. That can create a lot of data duplication — so even your IT leaders may not know who has copies of which data, or how many copies there may be. That’s obviously a problem for security reasons. It also makes it extremely difficult to comply with regulations such as GDPR and the California Consumer Privacy Act, or to respond effectively to audits. How can you tell your regulators exactly where personally identifying information sits, or where it has been, when you don’t even know how many copies exist?
Our Solution: Eliminate silos in the first place by using an application data platform, which addresses many of the use cases that would otherwise spur teams to duplicate data. And with MongoDB, you can federate queries across multiple sources, so you don’t have to move data into different formats. Our next installment will focus on your developers’ time: how it’s spent, and the price you pay when they can’t find time to develop and roll out best-in-class features. For a complete view of DIRT, read our white paper DIRT and the High Cost of Complexity.
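As a small illustration of the single security policy idea from Symptom #3: on a self-managed MongoDB deployment, a scoped role can be defined once at the database level and reused by every application (on Atlas, custom roles are managed through the Atlas UI or API instead). The deployment, role, and database names here are hypothetical.

```python
from pymongo import MongoClient

# Self-managed deployment assumed; on Atlas, custom roles are created
# through the Atlas UI/API rather than the createRole command.
client = MongoClient("mongodb://admin:<password>@localhost:27017")

# One scoped, read-only role over the whole "sales" database, defined once
# instead of being re-implemented in every application and data store.
client["sales"].command(
    "createRole",
    "salesReadOnly",
    privileges=[{
        "resource": {"db": "sales", "collection": ""},  # all collections
        "actions": ["find"],
    }],
    roles=[],
)
```

Because access control lives in one place, audits reduce to inspecting a single set of role definitions rather than a different mechanism per data store.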

December 15, 2021
Developer

MongoDB x Screaming in the Cloud: A Discussion

Held in Las Vegas every winter, AWS re:Invent features booths and exciting new demos from the biggest names in tech; a slate of fun, engaging activities; and inspirational keynotes by thought leaders and pioneers. Along with being one of today’s top tech expos, re:Invent is also the ideal venue for thinkers to meet and exchange ideas. At this year’s conference, Sahir Azam, Chief Product Officer at MongoDB, sat down with Corey Quinn, Chief Cloud Economist at Duckbill Group (and one of the most interesting men in tech), for a deep, wide-ranging conversation. Their chat covers everything from the state of databases today to the true definition of an application data platform. Read on for some highlights and listen to the episode here.

How is MongoDB adapting to the cloud?

Corey kicks off the talk with a big-picture question: How has MongoDB, a mainstay in the database world, evolved to match the rapidly changing demands of the market? Given the rapid proliferation of databases and related technologies, this question is especially timely. “What do you do these days?” Corey asks. “What is MongoDB in this ecosystem?” “Today, MongoDB has become one of the leading cloud database companies in the world,” Sahir replies. “The majority of [our] business comes from our cloud service. That’s our flagship product.”

One database to rule them all?

“[That] leads to the obvious question,” Corey continues. “What’s your take on the whole idea of a different database for every problem/customer/employee/API request?” “[Many] customers clearly moved to the cloud because they want to be able to move faster, innovate faster, be more competitive,” Sahir replies. Although it’s impossible for a single database vendor to address every customer need, Sahir also mentions that “cobbling together 15 different databases” forces teams to focus on troubleshooting instead of innovation. Instead, Sahir points out, the ideal database would fit “80% of [an organization’s] use cases, with niche technologies serving as specialized solutions for particular needs.”

What is the nature of MongoDB’s relationship with AWS?

“You mentioned that you are a partner with AWS,” Corey asks. “But how do you address the idea of partnering with a company that also heavily advantages its own first-party services?” Sahir’s reply — that MongoDB has a complex, multifaceted relationship with AWS but not an adversarial one — cites the two companies’ mutual interests and partnerships. “The idea of working with major platform players...being a customer, a partner, and a competitor is something that any organization at our scale and size [has to] navigate,” Sahir explains. “Honestly, there’s a lot more collaboration, both on the engineering side and in the field. We jointly work with customers and get them onto our platform way more often than the world sees.”

And much more...

Corey and Sahir’s discussion also covers how international customers use MongoDB, how potential users and customers perceive MongoDB, and what’s in store for future MongoDB products. Check out the full podcast!

December 15, 2021
Developer
