Silvio Sola


MACH Aligned for Retail: API-First

Retailers must constantly evolve to meet growing customer expectations and remain competitive. Both their internal- and external-facing applications must be developed using principles that promote agility and innovation, moving away from siloed architectures. As discussed in the first article of this series, the MACH Alliance promotes the development of modern applications through open tech ecosystems. MACH is an acronym for Microservices, API-first, Cloud-native SaaS, and Headless. MongoDB is a proud member of the Alliance, providing retailers with the tools to build highly flexible and scalable applications.

This is the second in a series of blog posts focused on MACH and how retail organizations can leverage this framework to gain a competitive advantage. In this article, we'll discuss concepts relating to the second letter of MACH: API-first. Read the first post in this series, "MACH Aligned for Retail: Microservices."

What is an API-first approach and why is it important?

An application programming interface (API) is a set of routines, protocols, and tools that allow applications, or services within a microservices architecture, to talk to each other. APIs can be seen as messengers that deliver requests and responses. Applications built around APIs are said to be API-first. With this approach, the design and development of the API comes before the software implementation: the API contract is typically specified and documented first, and the development team then builds the rest of the application against that contract. This methodology gives developers access to specific functionalities of external applications, or of other microservices within the same application, depending on their needs. It promotes reusability because functionalities are interoperable across mobile and other client applications.
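To make the contract-before-code idea concrete, here is a minimal sketch in Python. The endpoint, field names, and validator are hypothetical examples, not part of any MongoDB or MACH specification: the contract for placing an order is written down first, and requests can be checked against it before any backend exists.

```python
# Illustrative API-first sketch: the contract is written before the code.
# The endpoint name, fields, and validator are hypothetical examples.

ORDER_API_CONTRACT = {
    "endpoint": "POST /orders",
    "required_fields": {"customer_id", "items", "shipping_address"},
    "response_fields": {"order_id", "status"},
}

def validate_request(contract: dict, payload: dict) -> list[str]:
    """Return the required fields missing from a request payload."""
    return sorted(contract["required_fields"] - payload.keys())

# Frontend teams can build against the contract before the backend exists.
good = {"customer_id": "c1", "items": [{"sku": "A", "qty": 2}], "shipping_address": "..."}
bad = {"customer_id": "c1"}

print(validate_request(ORDER_API_CONTRACT, good))  # []
print(validate_request(ORDER_API_CONTRACT, bad))   # ['items', 'shipping_address']
```

Because the contract is agreed on up front, the web team, the mobile team, and the backend team can all work in parallel against the same specification.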
In addition, applications developed with an API layer in mind can adapt to new requirements more easily, because additional services and automation can be integrated into production as new requirements arise, keeping the application competitive for longer.

[Figure: An API-first approach to developing applications]

The role of API-first in retail

APIs play a crucial role in deeply interconnected systems that need to interface with other internal applications, third-party partners, and customers, all of which are key areas when it comes to developing powerful retail applications. Think about how an e-commerce platform connects to the different systems making up the purchase process, such as inventory management, checkout, payment processing, shipping, and loyalty programs. The use of APIs is deeply interlinked with the concept of microservices. Software and data need to be decoupled to enable retailers to meet ever-increasing requirements, including omnichannel and cross-platform integration, seamless experiences across physical and online stores, and the ability to leverage real-time capabilities that enable differentiating features, such as live inventory updates and real-time analytics. APIs can be seen as a bridge that lets loosely coupled microservices communicate with each other.

Besides enabling a microservices architecture, an API-first approach offers the following additional benefits:

- Avoid duplication of efforts and accelerate time to market. Developers can work on multiple frontends at the same time, confident that functionalities can be integrated by embedding the same APIs once they are ready. Think of multiple development teams working on an e-commerce web application, a mobile portal, and an internal inventory management system all at the same time. An API enabling the placement of a new order can be seamlessly leveraged by the web and mobile applications and fed into the inventory management system to aid warehouse workers.
Bug-fixing and feature enhancements can happen simultaneously, avoiding duplication of efforts and allowing new capabilities to be released to market more quickly.

- Reduce risks and operating costs. An API-first approach enables system stability and interoperability from the beginning, because API efficiency is placed at the center of the development lifecycle rather than being an afterthought once the application or functionality has been developed. This reduces risk for retailers and saves money and effort in troubleshooting unstable systems.

- Enable new opportunities and scale faster. A flexible approach revolving around APIs provides more opportunities for integrating and refactoring the way different client applications and microservices communicate with each other, allowing retailers to improve and scale their IT offering in a fraction of the time. It also changes the way retailers can interact and do business with external partners, who can be given the tools to integrate easily with the retailer's offering.

- Achieve language flexibility. Effective retailers need the capability to adapt their digital offering to different regions and languages. The plug-in capabilities of API-first allow developers to offer language-agnostic solutions that different microservices can integrate with, leveraging region-specific frontends.

[Figure: Steps to an API-first application]

What is the alternative?

The four MACH Alliance principles combined (Microservices, API-first, Cloud-native SaaS, Headless) act as a disrupting force compared to the way applications were built until recently. Adapting to a new technology paradigm requires effort and a different developer mindset. But what was there before? From an API-first perspective, it can be said that the opposite is code-first.
With this approach, application development starts in the integrated development environment (IDE), where code is written and the software takes shape. Development teams know they will need to build an interface to interact with each function of the code, but it is seldom a priority; developing core functionalities takes precedence over the interface through which those functionalities will be hosted and accessed. When the time comes for the interface to be developed, the code has already been defined. This means the API is developed around existing code rather than vice versa, which poses limitations. For example, developers might not be able to return data the way they want because of the underlying data schema.

[Figure: The code-first approach]

Bottlenecks can also occur, as other teams requiring the API will need to wait until the code is finalized before they can embed it in their underlying applications. Any delays in the software development lifecycle will hold them up and delay progress. Although a code-first approach might have worked in the past, it is no longer suitable for dealing with highly interconnected applications. Learn more about how MongoDB and MACH are changing the game for e-commerce.

How MongoDB helps achieve an API-first approach

Simply lifting and shifting monolithic applications to a microservices, API-first architecture will provide only minimal benefits if they are still supported by a relational data layer. This is where most of the bottlenecks occur: changes to application functionalities will require constant refactoring of the database schemas, object-relational mapping (ORM), and refinement at the microservice level. Moving to a modern MACH architecture requires a modern data platform that removes data silos.
The MongoDB developer data platform provides a flexible data model, along with automation and scalability features, to adapt to even the most challenging retail use cases and to multiple platforms (e.g., on-premises, cloud, mobile, and web applications). MongoDB Atlas, MongoDB's fully managed cloud database, also provides capabilities to manage the data layer end to end via APIs, such as the MongoDB Atlas Data API. This is a REST-like, resilient API for accessing all Atlas data that enables CRUD operations and aggregations with instantly generated endpoints. It is a perfect fit for an API-first approach, since developers can access their data using the same principles leveraged to connect to other applications and services.

[Figure: The MongoDB Atlas Data API workflow]

The Atlas Data API provides several other benefits, allowing developers to:

- Build faster with developer-friendly data access. Developers work with a familiar, REST-like query and response format; no client-side drivers are necessary.

- Scale confidently with a resilient, fully managed API that reduces the operational complexity needed to start reading and writing data.

- Integrate MongoDB Atlas data seamlessly into any part of the stack, from microservices to analytics workloads.

This article has provided only a sample of what can be leveraged via MongoDB's APIs. The MongoDB Query API provides a comprehensive set of features for working with data in a native, familiar way. It supports multiple index types, geospatial data, materialized views, full-text search, and much more.

In the next part of this MongoDB and MACH Alliance series, we will discuss how a cloud-native SaaS architecture can enable full application flexibility and scalability. Read the first post in this series, "MACH Aligned for Retail: Microservices."
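As a closing illustration of the Data API access pattern described above, the sketch below constructs (but does not send) the HTTP request a client would issue for the Data API's findOne action. The App ID, cluster name, database, collection, and API key are placeholder values you would replace with your own.

```python
import json

# Sketch of an Atlas Data API "findOne" request; built here, not sent.
# The App ID, cluster name, and API key below are placeholders.
APP_ID = "<your-app-id>"
BASE_URL = f"https://data.mongodb-api.com/app/{APP_ID}/endpoint/data/v1"

def build_find_one_request(database: str, collection: str, filter_doc: dict) -> dict:
    """Return the URL, headers, and JSON body for a Data API findOne call."""
    return {
        "url": f"{BASE_URL}/action/findOne",
        "headers": {
            "Content-Type": "application/json",
            "api-key": "<your-data-api-key>",  # placeholder credential
        },
        "body": json.dumps({
            "dataSource": "Cluster0",          # placeholder cluster name
            "database": database,
            "collection": collection,
            "filter": filter_doc,
        }),
    }

request = build_find_one_request("ecommerce", "orders", {"order_id": "A1001"})
print(request["url"])
```

Any HTTP client, from a browser fetch to a backend microservice, can issue this request, which is exactly the driverless, API-first access pattern the Data API is designed for.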

June 24, 2022

Digital Underwriting: A Digital Transformation Wave in Insurance

Underwriting processes are at the core of insurance companies, and their effectiveness is directly related to insurers' profitability and success. Despite this, underwriting is often one of the most underserved parts of the insurance industry from a technology perspective. There may be sophisticated policy, customer, and claim administration systems, but underwriters often find themselves wrangling data from a variety of sources into spreadsheets in order to adequately evaluate the financial risks that new applicants and scenarios might bring, and to translate them into appropriate pricing and coverage decisions.

Due to the complexity and variety of the information and sources that must be accessed and integrated, modernized underwriting platforms have often been a difficult objective for many insurers to achieve. The cost and time associated with building such systems, and the possibility of minimal short-term return on investment, have also made it difficult for leaders to secure funding and support within their organizations. These factors have forced underwriters to persist with manual processes, which, at best, are highly inefficient. At worst, they do not sufficiently position an insurer to be competitive in the digitally disrupted future of insurance delivery.

It does not have to be this way, however. This blog post highlights ways in which insurance companies can leverage new technology, and incorporate modern architecture paradigms into their information systems, in order to revolutionize their underwriting workflows.

The underwriting revolution

Technology is changing the way organizations operate and measure risk. New technological advancements in the IoT, manufacturing, and automotive spaces, to mention just a few, are driving insurers to develop new underwriting paradigms that are personalized to each individual and adjusted based on real-time data.
This is already a reality, with some insurers leveraging personal wearable technology to assess the fitness level of clients and adjust life and health insurance premiums accordingly. And we are only at the beginning; let's explore what this might look like in 2030.

Imagine a scenario where a professional living in a major urban area orders a self-driving car through their digital assistant to get to a meeting. The assistant is directly linked to the user's insurer, which allows the insurer to automatically calculate the best possible route, taking into account the time required, past accident history, and current traffic conditions, so that the likelihood of car damage and accidents is minimized. If the user decides to drive themselves that day or picks a different route, the mobility premium will increase based on real-time variables of the journey. The user's mobility insurance can be linked to other services, such as a life insurance policy, which can also be subject to increase depending on the commute's risk factors.

We don't have to wait for 2030 for a scenario like this to come to fruition. Thanks to advances in IoT devices, mobile computing, and deep learning techniques that mimic the human brain's perception, reasoning, learning, and problem-solving, many of these capabilities can be made a reality here in 2022. As the insurance industry continues to innovate, the underwriting process is under constant evolution as a result. Certainly, in the scenario described above, the underwriting decision-making process has shifted from a spreadsheet-based, manual one to one that is fully automated, with AI/ML decision support. The insurers who can achieve this will retain and gain a significant competitive advantage over the next decade.

Technology can help streamline new cases

Underwriters are notoriously faced with administrative complexity when managing any new case, regardless of the risk profile or level.
In the commercial insurance space, agents and brokers are generally used as a bridge between the insurer and the insured. Email exchanges among the parties are common; these often lack sufficient detail and require the underwriter to chase missing data in order to successfully close the sale and acquire the new business. Issues with data quality, or the lack of certain key pieces of information, can be addressed by implementing automated intake procedures that leverage natural language processing (NLP), optical character recognition (OCR), and rich text analysis to programmatically extract data from email and other forms of written communication, alert the agent in case of missing information, and even attempt to automatically enrich missing information in order to facilitate closing the sale.

What's described above is only the beginning of what is possible when we start to think about how to bolster and augment underwriting procedures within an insurer. Sanding off the rough edges by reducing manual procedures, and helping underwriters focus less on non-differentiating work and more on high-value activities, can not only alleviate significant pain and frustration for the underwriter; it can also help grow the book of business by offering more competitive pricing, products, and turnaround times.

Triaging times can be drastically reduced

Insurance providers seeking to grow their book of business, and to expand the channels through which they sell, may have to deal with a surge of new coverage requests and changing risk scenarios. However, many insurers may be unprepared to handle such increases in new business intake volume. Because of legacy systems, workflow, and resource bottlenecks, a significant uptick in new business could actually result in a negative outcome for the insurer, due to the inability to process it in a timely and efficient manner.
Could you lose business to a competitor because it could not be underwritten in time? Augmenting traditional workflows with automation and machine learning algorithms can begin to address this challenge. How can you do more without significantly burdening or expanding your underwriting team? Many insurers are beginning to automatically classify and route such increases in business demand using AI/ML.

A first step in the underwriting process, after initial intake and enrichment, is triaging: deciding who can best underwrite the given request. Often, this too is a manual process, relying heavily on someone within the organization who knows how to best route the flow of work based on the skills and experience of the underwriting staff. As with detecting the need for, and enriching, the initial submission, machine learning algorithms can be leveraged to ease the burden and reduce the human bottleneck of routing intake work to the best-suited underwriter.

Risk assessment processes can be made more effective

Once the intake of new cases has been automated and triaged, we need to think about how to streamline the risk assessment process. Does every single new business case need to be priced and adjusted by an actual underwriter? If we can triage and determine who should work on a new case, can we also route some of the low-risk work to a fully automated pricing and underwriting workflow? Can we save the precious time of our underwriting staff for the higher-touch business and accounts that truly need their attention and expertise?

Automated risk assessment has roots in rule-based expert systems dating back to the 1990s. These systems contained tens of thousands of hard-coded underwriting rules that could assess medical, occupational, and avocational risk. They became very complex over the years and still play an essential role in underwriting.
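A drastically simplified sketch of such a rule-based assessment, together with the programmatic pricing step it can feed, might look like the following. Every rule, threshold, risk class, and multiplier here is invented purely for illustration; production systems encode many thousands of such rules.

```python
# Toy rule-based underwriting sketch. Every rule, threshold, and
# multiplier here is invented for illustration only.

def assess_risk(applicant: dict) -> str:
    """Apply a few hard-coded rules and return a risk class."""
    score = 0
    if applicant.get("smoker"):
        score += 2                       # medical risk rule
    if applicant.get("occupation") in {"miner", "pilot"}:
        score += 2                       # occupational risk rule
    if applicant.get("hobbies", set()) & {"skydiving", "scuba diving"}:
        score += 1                       # avocational risk rule
    return "high" if score >= 3 else "standard" if score >= 1 else "preferred"

def quote_premium(base_rate: float, risk_class: str) -> float:
    """Derive a premium from the risk class via a fixed multiplier."""
    multiplier = {"preferred": 0.9, "standard": 1.0, "high": 1.5}[risk_class]
    return round(base_rate * multiplier, 2)

applicant = {"smoker": True, "occupation": "pilot", "hobbies": {"chess"}}
risk = assess_risk(applicant)
print(risk, quote_premium(100.0, risk))  # high 150.0
```

Low-risk cases classified this way can flow straight through to an automated quote, while high-risk ones are routed to a human underwriter.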
ML algorithms can enhance the performance of these systems by fine-tuning underwriting rules and finding new patterns of risk information. The vast amount of data available to insurers can also be used to predict the risk of new cases and scenarios. Once the risk profile of a new case has been established, a pricing model can be applied to programmatically derive the policy cost and communicate it to the prospective client without involving the underwriting team, as imagined in the 2030 scenario mentioned earlier in this article.

Conclusion and follow-up

There are plenty of digital transformation opportunities in the insurance industry. More specifically, focusing on underwriting will help new and existing players gain a significant competitive advantage in the coming decade. Whether human-based or AI/ML-augmented, underwriting decisions will be underpinned by an ever-growing variety and volume of complex data.

In the next blog post of the series, "Riding the Transformation Wave with MongoDB," we'll dive deeper into how MongoDB helps insurance innovators create, transform, and disrupt the industry by unleashing the power of software and data. Stay tuned!

Contact us to learn how MongoDB is helping insurance innovators create, transform, and disrupt the industry by unleashing the power of software and data.

June 2, 2022