We’re excited to announce the general availability of the Atlas Kubernetes Operator, the best way to use MongoDB with Kubernetes.
The Atlas Kubernetes Operator makes it easy to deploy, manage, and access MongoDB Atlas from your preferred Kubernetes distribution. When the operator is installed into your Kubernetes environment, it exposes Kubernetes custom resources to fully manage projects, deployments (clusters and serverless instances), network access (IP Access Lists and Private Endpoints), database users, backup, and more. For a full list of capabilities, check out the Atlas Operator documentation.
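As an illustrative sketch of what those custom resources look like, the snippet below declares a project and a deployment. The resource kinds come from the operator, but the metadata names, IP address, and exact spec fields are assumptions here and may vary between operator versions — consult the Atlas Operator documentation for the authoritative schema.

```shell
# Hypothetical example: declare an Atlas project and a deployment as
# Kubernetes custom resources. Spec fields may differ across operator versions.
kubectl apply -f - <<'EOF'
apiVersion: atlas.mongodb.com/v1
kind: AtlasProject
metadata:
  name: my-project
spec:
  name: "My Atlas Project"
  projectIpAccessList:
    - ipAddress: "192.0.2.15"      # example address for an app server
      comment: "Application server"
---
apiVersion: atlas.mongodb.com/v1
kind: AtlasDeployment
metadata:
  name: my-deployment
spec:
  projectRef:
    name: my-project               # links back to the AtlasProject above
  deploymentSpec:
    name: test-deployment
EOF
```

Once applied, the operator reconciles these resources against your Atlas organization, so the manifests can live in the same Git repository and CI/CD pipeline as your application workloads.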
The Atlas Operator adheres to Kubernetes standards. It’s open source and built with the CNCF Operator Framework, so you can be confident it will work in your Kubernetes environment. The Operator supports any Certified Kubernetes Distribution and is OpenShift-certified.
With the Operator, you can easily manage your Atlas resources directly from Kubernetes, using the Kubernetes API. This means no switching between systems: you can manage your containerized applications and the data layer powering them from a single control plane. This also makes it easy to integrate Atlas into your Kubernetes-native CI/CD pipelines, automatically setting up and tearing down infrastructure as part of your deployment process.
Why Kubernetes and MongoDB Atlas? Atlas is a multi-cloud document database that provides the versatility you need to build sophisticated and resilient applications. It has built-in high availability, is easily scalable, and is flexible enough to support rapid iteration and shipping of new application features. This makes it a great fit for the modern development and deployment practices that containerization and Kubernetes support. It’s also incredibly simple to deploy multi-cloud clusters or move between clouds on Atlas — a good match for the portability that containers provide.
Digital Underwriting: A Digital Transformation Wave in Insurance
Underwriting processes are at the core of insurance companies, and their effectiveness is directly related to insurers’ profitability and success. Despite this, underwriting is often one of the most underserved parts of the insurance industry from a technology perspective. An insurer may have sophisticated policy, customer, and claim administration systems, yet underwriters often find themselves wrangling data from a variety of sources into spreadsheets in order to adequately evaluate the financial risks that new applicants and scenarios might bring, and to translate them into appropriate pricing and coverage decisions.

Because of the complexity and variety of the information and sources that must be accessed and integrated, a modernized underwriting platform has been a difficult objective for many insurers to achieve. The cost and time associated with building such systems, and the possibility of minimal short-term return on investment, have also made it difficult for leaders to secure funding and support within their organizations. These factors have forced underwriters to persist with manual processes, which at best are highly inefficient and at worst leave an insurer poorly positioned to compete in the digitally disrupted future of insurance delivery. It does not have to be this way, however. This blog post highlights ways in which insurance companies can leverage new technology and incorporate modern architecture paradigms into their information systems to revolutionize their underwriting workflows.

The underwriting revolution

Technology is changing the way organizations operate and measure risk. New technological advancements in the IoT, manufacturing, and automotive spaces, to mention just a few, are driving insurers to develop new underwriting paradigms that are personalized to each individual and adjusted based on real-time data.
This is already a reality, with some insurers leveraging personal wearable technology to assess the fitness level of clients and adjust life and health insurance premiums accordingly. And we are only at the beginning; let’s explore what this might look like in 2030. Imagine a scenario in which a professional living in a major urban area orders a self-driving car through their digital assistant to get to a meeting. The assistant is directly linked to the user’s insurer, which allows the insurer to automatically calculate the best possible route, taking into account the time required, past accident history, and current traffic conditions, so that the likelihood of car damage and accidents is minimized. If the user decides to drive themselves that day, or picks a different route, the mobility premium will increase based on real-time variables of the journey. The user’s mobility insurance can be linked to other services, such as a life insurance policy, which can also be subject to increase depending on the commute’s risk factors.

We don’t have to wait for 2030 for a scenario like this to come to fruition. Thanks to advances in IoT devices, mobile computing, and deep learning techniques that mimic the human brain's perception, reasoning, learning, and problem-solving, many of these capabilities can be made a reality here in 2022. As the insurance industry continues to innovate, the underwriting process evolves with it. In the scenario described above, the underwriting decision-making process has shifted from a spreadsheet-based, manual one to one that is fully automated, with AI/ML decision support. The insurers who achieve this will gain and retain a significant competitive advantage over the next decade.

Technology can help streamline new cases

Underwriters are notoriously faced with administrative complexity when managing any new case, regardless of the risk profile or level.
In the commercial insurance space, agents and brokers generally act as a bridge between the insurer and the insured. Email exchanges among the parties are common; these often lack sufficient detail, forcing the underwriter to chase missing data in order to close the sale and acquire the new business. Issues with data quality, or the absence of key pieces of information, can be addressed by implementing automated intake procedures that leverage Natural Language Processing (NLP), Optical Character Recognition (OCR), and rich text analysis to programmatically extract data from email and other forms of written communication, alert the agent when information is missing, and even attempt to automatically enrich the submission in order to facilitate the sale.

What’s described above is only the beginning of what’s possible when we think about how to bolster and augment underwriting procedures within an insurer. Sanding off the rough edges by reducing manual procedures, and helping underwriters focus less on non-differentiating work and more on high-value activities, not only alleviates significant pain and frustration for the underwriter; it can also help grow the book of business by offering more competitive pricing, products, and turnaround times.

Triaging times can be drastically reduced

Insurance providers seeking to grow their book of business and expand the channels through which they sell may have to deal with a surge of new coverage requests and changing risk scenarios. However, many insurers are unprepared to handle such increases in new business intake volumes. Because of legacy systems, workflow constraints, and resource bottlenecks, a significant uptick in new business could actually result in a negative outcome for the insurer, due to the inability to process it in a timely and efficient manner.
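As a toy sketch of the automated intake described earlier, the snippet below extracts structured fields from a broker email and flags the gaps. The field names and patterns are illustrative assumptions, not a real carrier schema; a production system would use NLP/OCR models rather than regular expressions.

```python
import re

# Fields an underwriter needs before a submission can be processed.
# These names and patterns are illustrative, not a real carrier schema.
REQUIRED_FIELDS = {
    "business_name": re.compile(r"Business name:\s*(.+)", re.IGNORECASE),
    "annual_revenue": re.compile(r"Annual revenue:\s*\$?([\d,]+)", re.IGNORECASE),
    "employee_count": re.compile(r"Employees:\s*(\d+)", re.IGNORECASE),
}

def extract_submission(email_body: str) -> dict:
    """Extract structured fields from a broker email, flagging missing ones."""
    extracted, missing = {}, []
    for field, pattern in REQUIRED_FIELDS.items():
        match = pattern.search(email_body)
        if match:
            extracted[field] = match.group(1).strip()
        else:
            missing.append(field)
    return {"fields": extracted, "missing": missing}

email = """Hi team, new submission below.
Business name: Acme Logistics LLC
Employees: 42
Thanks, the broker"""

result = extract_submission(email)
# "annual_revenue" ends up in result["missing"], so the system
# could alert the broker automatically instead of an underwriter chasing it.
```

The `missing` list is what drives the automated follow-up: rather than an underwriter noticing the gap days later, the agent is prompted immediately.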
Could you lose business to a competitor simply because it could not be underwritten in time? Augmenting traditional workflows with automation and machine learning algorithms can begin to address this challenge. How can you do more without significantly burdening or expanding your underwriting team? Many insurers are beginning to automatically classify and route increases in business demand using AI/ML. A first step in the underwriting process, after initial intake and enrichment, is triaging: deciding who can best underwrite the given request. Often this too is a manual process, relying heavily on someone within the organization who knows how to best route the flow of work based on the skills and experience of the underwriting staff. Just as machine learning can detect the need for, and perform, enrichment of the initial submission, it can also ease the burden and reduce the human bottleneck of routing intake work to the best-suited underwriter.

Risk assessment processes can be made more effective

Once the intake of new cases has been automated and triaged, we need to think about how to streamline the risk assessment process. Does every single new business case need to be priced and adjusted by an actual underwriter? If we can triage and determine who should work on a new case, can we also route some of the low-risk work to a fully automated pricing and underwriting workflow? Can we save the precious time of our underwriting staff for the higher-touch business and accounts that truly need their attention and expertise?

Automated risk assessment has roots in rule-based expert systems dating back to the 1990s. These systems contained tens of thousands of hard-coded underwriting rules that could assess medical, occupational, and avocational risk. They became very complex over the years and still play an essential role in underwriting.
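A rule-based system of the kind described above can be sketched in miniature. Real engines encode tens of thousands of rules; the three below, and the score thresholds, are purely illustrative assumptions.

```python
# Toy sketch of a rule-based underwriting engine. Real systems encode tens
# of thousands of rules; these three rules and thresholds are illustrative.
def assess_risk(applicant: dict) -> dict:
    """Apply hard-coded underwriting rules; return a score and the reasons."""
    score, reasons = 0, []
    if applicant.get("smoker"):
        score += 30
        reasons.append("smoker")
    if applicant.get("age", 0) > 60:
        score += 20
        reasons.append("age over 60")
    if applicant.get("occupation") in {"miner", "pilot"}:
        score += 25
        reasons.append("hazardous occupation")
    # Low scores can flow straight to automated pricing; only high-scoring
    # cases are referred to a human underwriter.
    return {"score": score, "refer_to_underwriter": score >= 40, "reasons": reasons}

case = assess_risk({"age": 45, "smoker": True, "occupation": "teacher"})
```

The `refer_to_underwriter` flag is the routing decision: it is exactly the point where ML models can later replace or re-weight the hand-written rules.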
ML algorithms can enhance the performance of these systems by fine-tuning underwriting rules and finding new patterns of risk information. The vast amount of data available to insurers can also be used to predict the risk of new cases and scenarios. Once the risk profile of a new case has been established, a pricing model can be applied to programmatically derive the policy cost and communicate it to the prospective client without involving the underwriting team, as imagined in the 2030 scenario mentioned earlier in this article.

Conclusion and follow-up

There are plenty of digital transformation opportunities in the insurance industry. Focusing on underwriting in particular will help new and existing players gain a significant competitive advantage in the coming decade. Whether human-based or AI/ML-augmented, underwriting decisions will be underpinned by an ever-growing variety and volume of complex data. In the next blog of the series, Riding the Transformation Wave with MongoDB, we’ll dive deeper into how MongoDB helps insurance innovators create, transform, and disrupt the industry by unleashing the power of software and data. Stay tuned!

Contact us to learn how MongoDB is helping insurance innovators create, transform, and disrupt the industry by unleashing the power of software and data.
Modernize your GraphQL APIs with MongoDB Atlas and AWS AppSync
Modern applications typically need data from a variety of data sources, which are frequently backed by different databases and fronted by a multitude of REST APIs. Consolidating that data into a single coherent API presents a significant challenge for application developers. GraphQL emerged as a leading data query and manipulation language to simplify consolidating various APIs. GraphQL provides a complete and understandable description of the data in your API, giving clients the power to ask for exactly what they need while making it easier to evolve APIs over time. It complements popular development stacks like MEAN and MERN, aggregating data from multiple origins into a single source that applications can then easily interact with.

MongoDB Atlas: A modern developer data platform

MongoDB Atlas is a modern developer data platform with a fully managed cloud database at its core. It provides rich features like native time series collections, geospatial data, multi-level indexing, search, isolated workloads, and many more, all built on top of the flexible MongoDB document data model. MongoDB Atlas App Services helps developers build apps, integrate services, and connect to their data while reducing operational overhead, through features such as the hosted Data API and GraphQL API. The Atlas Data API allows developers to easily integrate Atlas data into their cloud apps and services over HTTPS with a flexible, REST-like API layer. The Atlas GraphQL API lets developers access Atlas data from any standard GraphQL client with an API generated from your data’s schema.

AWS AppSync: Serverless GraphQL and pub/sub APIs

AWS AppSync is a managed AWS service that allows developers to build GraphQL and Pub/Sub APIs. With AWS AppSync, developers can create APIs that access data from one or many sources and enable real-time interactions in their applications.
The resulting APIs are serverless, automatically scale to meet the throughput and latency requirements of the most demanding applications, and charge only for requests to the API and for real-time messages delivered.

Exposing your MongoDB data over a scalable GraphQL API with AWS AppSync

Together, AWS AppSync and MongoDB Atlas help developers create GraphQL APIs by integrating multiple REST APIs and data sources on AWS. This gives frontend developers a single GraphQL API data source to drive their applications. Compared to REST APIs, developers get flexibility in defining the structure of the data while reducing the payload size by fetching only the attributes that are required. Additionally, developers can take advantage of other AWS services such as Amazon Cognito, AWS Amplify, Amazon API Gateway, and AWS Lambda when building modern applications. This allows for a serverless end-to-end architecture, backed by MongoDB Atlas serverless instances and available in pay-as-you-go mode from the AWS Marketplace.

Paths to integration

AWS AppSync uses data sources and resolvers to translate GraphQL requests and to retrieve data; for example, users can fetch MongoDB Atlas data using AppSync Direct Lambda Resolvers. Below, we explore two approaches to implementing Lambda Resolvers: using the Atlas Data API or connecting directly via MongoDB drivers.

Using the Atlas Data API in a Direct Lambda Resolver

With this approach, developers leverage the pre-created Atlas Data API when building a Direct Lambda Resolver. This ready-made API acts as a data source in the resolver and supports popular authentication mechanisms based on API keys, JWT, or email/password, enabling seamless integration with Amazon Cognito to manage customer identity and access. The Atlas Data API lets you read and write data in Atlas using standard HTTPS requests and comes with managed networking and connections, replacing your typical app server.
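As a minimal sketch of this approach, the Lambda handler below forwards an AppSync query to the Atlas Data API over plain HTTPS. The endpoint URL placeholder, database/collection names, and the shape of the AppSync event arguments are assumptions for illustration; real values come from your Atlas app and GraphQL schema.

```python
import json
import os
import urllib.request

# Sketch of a Direct Lambda Resolver that forwards an AppSync query to the
# Atlas Data API. The URL placeholder, database/collection names, and event
# shape are illustrative assumptions, not fixed values.
DATA_API_URL = os.environ.get(
    "DATA_API_URL",
    "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/findOne",
)

def build_payload(event: dict) -> dict:
    """Translate AppSync resolver arguments into a Data API findOne request."""
    return {
        "dataSource": "mongodb-atlas",
        "database": "sample_mflix",
        "collection": "movies",
        "filter": {"title": event["arguments"]["title"]},
    }

def handler(event, context):
    """Lambda entry point: POST the request to the Atlas Data API."""
    request = urllib.request.Request(
        DATA_API_URL,
        data=json.dumps(build_payload(event)).encode(),
        headers={
            "Content-Type": "application/json",
            # The API key would be read from AWS Secrets Manager in practice.
            "api-key": os.environ["DATA_API_KEY"],
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["document"]
```

Because the Data API speaks plain HTTPS, the function needs no database driver in its deployment package, which keeps the Lambda small and cold starts fast.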
Any runtime capable of making HTTPS calls is compatible with the API.

Figure 1: Architecture details of Direct Lambda Resolver with Data API

Figure 1 shows how AWS AppSync leverages the AWS Lambda Direct Resolver to connect to the MongoDB Atlas Data API. The Atlas Data API then interacts with your Atlas cluster to retrieve and store the data.

MongoDB driver-based Direct Lambda Resolver

With this option, the Lambda Resolver connects to MongoDB Atlas directly via drivers, which are available in multiple programming languages and provide idiomatic access to MongoDB. MongoDB drivers support a rich set of functionality and options, including the MongoDB Query Language, write and read concerns, and more.

Figure 2: Architecture details of Direct Lambda Resolvers through native MongoDB drivers

Figure 2 shows how the AWS AppSync endpoint leverages Lambda Resolvers to connect to MongoDB Atlas. The Lambda function uses a MongoDB driver to make a direct connection to the Atlas cluster and to retrieve and store data.

The table below summarizes the different resolver implementation approaches.

Table 1: Feature comparison of resolver implementations

Setup

1. Atlas cluster: Set up a free cluster in MongoDB Atlas, configure the database for network security and access, and set up the Data API.
2. Secrets Manager: Create an AWS Secrets Manager secret to securely store database credentials.
3. Lambda function: Create Lambda functions that use the MongoDB Data API or MongoDB drivers, as shown in this GitHub tutorial.
4. AWS AppSync setup: Set up AWS AppSync to configure the data source and query.
5. Test API: Test the AWS AppSync APIs using the AWS Console or Postman.

Figure 3: Test results for the AWS AppSync query

Conclusion

To learn more, refer to the AppSync Atlas Integration GitHub repository for step-by-step instructions and sample code. This solution can be extended to AWS Amplify for building mobile applications. For further information, please contact email@example.com.