Luca Napoli


AI-Powered Call Centers: A New Era of Customer Service

Customer satisfaction is critical for insurance companies. Studies have shown that companies with superior customer experiences consistently outperform their peers. McKinsey found that life and property/casualty insurers with superior customer experiences saw a 20% and 65% increase in Total Shareholder Return, respectively, over five years. A satisfied customer is also a loyal one: they are 80% more likely to renew their policies, directly contributing to sustainable growth. However, one major challenge faced by many insurance companies is the inefficiency of their call centers. Agents often struggle to quickly locate and deliver accurate information to customers, leading to frustration and dissatisfaction.

This article explores how Dataworkz and MongoDB can transform call center operations. By converting call recordings into searchable vectors (numerical representations of data points in a multi-dimensional space), businesses can quickly access relevant information and improve customer service. We'll dig into how the integration of Amazon Transcribe, Cohere, and MongoDB Atlas Vector Search—as well as Dataworkz's RAG-as-a-service platform—is achieving this transformation.

From call recordings to vectors: A data-driven approach

Customer service interactions are goldmines of valuable insights. By analyzing call recordings, we can identify successful resolution strategies and uncover frequently asked questions. In turn, by making this information—which is often buried in audio files—accessible to agents, we enable them to give customers faster and more accurate assistance. However, the vast volume and unstructured nature of these audio files make it challenging to extract actionable information efficiently.

To address this challenge, we propose a pipeline that leverages AI and analytics to transform raw audio recordings into vectors, as shown in Figure 1:

1. Storage of raw audio files: Past call recordings are stored in their original audio format.
2. Processing of the audio files with AI and analytics services (such as Amazon Transcribe Call Analytics): speech-to-text conversion, summarization of content, and vectorization.
3. Storage of vectors and metadata: The generated vectors and associated metadata (e.g., call timestamps, agent information) are stored in an operational data store.

Figure 1: Customer service call insight extraction and vectorization flow

Once the data is stored in vector format within the operational data store, it becomes accessible for real-time applications. This data can be consumed directly through vector search or integrated into a retrieval-augmented generation (RAG) architecture, a technique that combines the capabilities of large language models (LLMs) with external knowledge sources to generate more accurate and informative outputs.

Introducing Dataworkz: Simplifying RAG implementation

Building RAG pipelines can be cumbersome and time-consuming for developers who must learn yet another stack of technologies. Especially in this initial phase, when companies want to experiment and move fast, it is essential to leverage tools that abstract complexity and don't require deep knowledge of each component, so teams can experiment with and realize the benefits of RAG quickly. Dataworkz offers a powerful and composable RAG-as-a-service platform that streamlines the process of building RAG applications for enterprises.
To operationalize RAG effectively, organizations need to master five key capabilities:

- ETL for LLMs: Dataworkz connects with diverse data sources and formats, transforming the data to make it ready for consumption by generative AI applications.
- Indexing: The platform breaks down data into smaller chunks and creates embeddings that capture semantics, storing them in a vector database.
- Retrieval: Dataworkz ensures the retrieval of accurate information in response to user queries, a critical part of the RAG process.
- Synthesis: The retrieved information is then used to build the context for a foundational model, generating responses grounded in reality.
- Monitoring: With many moving parts in the RAG system, Dataworkz provides robust monitoring capabilities essential for production use cases.

Dataworkz's intuitive point-and-click interface (as seen in Video 1) simplifies RAG implementation, allowing enterprises to quickly operationalize AI applications. The platform offers flexibility and choice in data connectors, embedding models, vector stores, and language models. Additionally, tools like A/B testing ensure the quality and reliability of generated responses. This combination of ease of use, optionality, and quality assurance is a key tenet of Dataworkz's "RAG as a Service" offering.

Diving deeper: System architecture and functionalities

Now that we've looked at the components of the pre-processing pipeline, let's explore the proposed real-time system architecture in detail. It comprises the following modules and functions (see Figure 2):

- Amazon Transcribe receives the audio coming from the customer's phone and converts it into text.
- Cohere's embedding model, served through Amazon Bedrock, vectorizes the text coming from Transcribe.
- MongoDB Atlas Vector Search receives the query vector and returns a document that contains the most semantically similar FAQ in the database.

Figure 2: System architecture and modules

Here are a couple of FAQs we used for the demo:

Q: “Can you explain the different types of coverage available for my home insurance?”
A: “Home insurance typically includes coverage for the structure of your home, your personal belongings, liability protection, and additional living expenses in case you need to temporarily relocate. I can provide more detailed information on each type if you'd like.”

Q: “What is the process for adding a new driver to my auto insurance policy?”
A: “To add a new driver to your auto insurance policy, I'll need some details about the driver, such as their name, date of birth, and driver's license number. We can add them to your policy over the phone, or you can do it through our online portal.”

Note that the questions are included just for reference; they are not used for retrieval. The actual question is provided by the user through the voice interface and then matched in real time with the answers in the database using Vector Search. This information is finally presented to the customer service operator in text form (see Figure 3).

The proposed architecture is simple but powerful, easy to implement, and effective. Moreover, it can serve as a foundation for more advanced use cases that require complex interactions, such as agentic workflows, and iterative, multi-step processes that combine LLMs and hybrid search to complete sophisticated tasks.
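To make this flow concrete, here is a minimal sketch of both stages: vectorizing call summaries with their metadata at ingest time, and matching a live utterance against the FAQ collection at query time. It assumes Cohere Embed v3 served through Amazon Bedrock; the database, collection, field, and index names are illustrative.

```python
import json

import boto3
from pymongo import MongoClient

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region where the model is enabled
db = MongoClient("<ATLAS_CONNECTION_STRING>")["insurance"]

def embed(texts, input_type):
    # Cohere Embed v3 on Amazon Bedrock; returns one vector per input text
    response = bedrock.invoke_model(
        modelId="cohere.embed-english-v3",
        body=json.dumps({"texts": texts, "input_type": input_type}),
    )
    return json.loads(response["body"].read())["embeddings"]

# --- Ingest: store a transcript summary, its vector, and call metadata ---
call = {
    "callId": "C-1042",  # hypothetical metadata fields
    "agent": "J. Smith",
    "timestamp": "2024-11-05T10:32:00Z",
    "summary": "Customer asked how to add a new driver to an auto policy.",
}
call["summaryEmbedding"] = embed([call["summary"]], "search_document")[0]
db["call_transcripts"].insert_one(call)

# --- Query: match the customer's live question to the closest FAQ ---
def find_best_faq(utterance):
    query_vector = embed([utterance], "search_query")[0]
    pipeline = [
        {
            "$vectorSearch": {
                "index": "faq_vector_index",
                "path": "answerEmbedding",
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": 1,  # the single most similar FAQ
            }
        },
        {"$project": {"answer": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return next(db["faqs"].aggregate(pipeline), None)

best = find_best_faq("How do I add a new driver to my policy?")
```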
Figure 3: App interface, displaying what has been asked by the customer (left) and how the information is presented to the customer service operator (right)

This solution not only impacts human operator workflows but can also underpin chatbots and voicebots, enabling them to provide more relevant and contextual customer responses.

Building a better future for customer service

By seamlessly integrating analytical and operational data streams, insurance companies can significantly enhance both operational efficiency and customer satisfaction. Our system empowers businesses to optimize staffing, accelerate inquiry resolution, and deliver superior customer service through data-driven, real-time insights. To embark on your own customer service transformation, explore our GitHub repository and take advantage of the Dataworkz free tier.

November 27, 2024

Empower Innovation in Insurance with MongoDB and Informatica

For insurance companies, determining the right technology investments can be difficult, especially in today's climate, where technology options are abundant but their future is uncertain. As is the case with many large insurers, there is a need to consolidate complex and overlapping technology portfolios. At the same time, insurers want to make strategic, future-proof investments that maximize their IT expenditures. What does the future hold, however? Enter scenario planning. Using the art of scenario planning, we can find some constants in a sea of uncertain variables and more wisely steer the organization when it comes to technology choices. Consider the following scenarios:

- Regulatory disruption: A sudden regulatory change forces re-evaluation of an entire market or offering.
- Market disruption: Vendor and industry alliances and partnerships create disruption and opportunity.
- Tech disruption: A new CTO directs a shift in the organization's cloud and AI investments, aligning with a revised business strategy.

What if you knew that one of these three scenarios was going to play out in your company but weren't sure which one? How would you invest now to prepare? At the same time that insurers are grappling with technology choices, they're also facing clashing priorities:

- Running the enterprise: supporting business imperatives and maintaining the health and security of systems.
- Innovating with AI: maintaining a competitive position by investing in AI technologies.
- Optimizing spend: minimizing technology sprawl and technical debt while maximizing business outcomes.

Data modernization

What is the common thread among all these plausible future scenarios? How can insurers apply scenario planning principles while bringing diverging forces into alignment? There is one constant in each scenario, and that's the organization's data: if it's hard to work with, any future scenario will be burdened by this fact. One of the most critical strategic investments an organization can make is to ensure its data is easy to work with. Today, we refer to this as data modernization, which involves removing the friction that manifests itself in data processing and ensuring data is current, secure, and adaptable. For developers, who are closest to the data, this means enabling them with a seamless, fully integrated developer data platform and a flexible data model.

In the past, data models and databases would remain unchanged for long periods. Today, this approach is outdated. Consolidation creates a data model problem, resulting in a portfolio with relational, hierarchical, and file-based data models—or, worst of all, a combination of all three. Add to this the increased complexity that comes with relational models, including supertype-subtype conditional joins and numerous data objects, and you can see how organizations wind up with a patchwork of data models and an overly complicated data architecture.

A document database, like MongoDB Atlas, stores data in documents and is often referred to as a non-relational (or NoSQL) database. The document model offers a variety of advantages and specifically excels in data consolidation and agility:

- Serves as the superset of all other data model types (relational, hierarchical, file-based, etc.)
- Consolidates data assets into elegant single views, capable of accommodating any data structure, format, or source
- Supports agile development, allowing for quick incorporation of new and existing data
- Eliminates the lengthy change cycles associated with rigid, single-schema relational approaches
- Makes data easier to work with, promoting faster application development

By adopting the document model, insurers can streamline their data operations, making their technology investments more efficient and future-proof.

The challenges of making data easier to work with include data quality. One significant hurdle insurers continue to face is the lack of a unified view of customers, products, and suppliers across various applications and regions. Data is often scattered across multiple systems and sources, leading to discrepancies and fragmented information. Even with centralized data, inconsistencies may persist, hindering the creation of a single, reliable record. To drive better reporting, analytics, and AI, insurers need a shared data source that is accurate, complete, and up to date. Centralized data is not enough; it must be managed, reconciled, standardized, cleansed, and enriched to maintain its integrity for decision-making. Mastering data management across countless applications and sources is complex and time-consuming. Success in master data management (MDM) requires business commitment and a suite of tools for data profiling, quality, and integration. Aligning these tools with business use cases is essential to extract the full value from MDM solutions, although the process can be lengthy.

Informatica's MDM solution and MongoDB

Informatica's MDM solution has been developed to answer the key questions organizations face when working with their customer data:

- "How do I get a 360-degree view of my customer, partner, and supplier data?"
- "How do I make sure that my data is of the highest quality?"

The Informatica MDM platform helps ensure that organizations around the world can confidently use their data and make business decisions based on it. Informatica's entire MDM solution is built on MongoDB Atlas, including its AI engine, Claire.

Figure 1: Everything you need to modernize the practice of master data management

Informatica MDM solves the following challenges:

- Consolidates data from overlapping and conflicting data sources.
- Identifies data quality issues and cleanses data.
- Provides governance and traceability of data to ensure transparency and trust.

Insurance companies typically have several claim systems that they've amassed over the years through acquisitions, each one containing customer data. The ability to relate that data and ensure it's of the highest quality enables insurers to overcome data challenges. MDM capabilities are essential for insurers who want to make informed decisions based on accurate and complete data. Below are some of the different use cases for MDM:

- Modernize legacy systems and processes (e.g., claims or underwriting) by effectively collecting, storing, organizing, and maintaining critical data
- Improve data security and strengthen fraud detection and prevention
- Manage customer data effectively for omni-channel engagement and cross- or up-selling
- Manage data for compliance, avoiding or predicting possible regulatory issues in advance

"Given we already leverage the performance and scale of MongoDB Atlas within our cloud-native MDM SaaS solution and share a common focus on high-value, industry solutions, this partnership was a natural next step. Now, as a strategic MDM partner of MongoDB, we can help customers rapidly consolidate and sunset multiple legacy applications for cloud-native ones built on a trusted data foundation that fuels their mission-critical use cases."

Rik Tamm-Daniels, VP of Strategic Ecosystems and Technology at Informatica

Taking the next step

For insurance companies navigating the complexities of modern technology and data management, MDM combined with powerful tools like MongoDB and Informatica provides a strategic advantage. As insurers face an uncertain future with potential regulatory, market, and technological disruptions, investing in a robust data infrastructure becomes essential. MDM ensures that insurers can consolidate and cleanse their data, enabling accurate, trustworthy insights for decision-making. By embracing data modernization and the flexibility of document databases like MongoDB, insurers can future-proof their operations, streamline their technology portfolios, and remain agile in an ever-changing landscape. Informatica's MDM solution, underpinned by MongoDB Atlas, offers the tools needed to master data across disparate systems, ensuring high-quality, integrated data that drives better reporting, analytics, and AI capabilities.

If you would like to discover more about how MongoDB and Informatica can help you on your modernization journey, take a look at the following resources:

- Unify data across the enterprise for a contextual 360-degree view and AI-powered insights with Informatica's MDM solution
- Automating digital underwriting with machine learning
- Claim management using LLMs and vector search for RAG

October 22, 2024

Unlock PDF Search in Insurance with MongoDB & Superduper.io

As industries go, the insurance industry is particularly document-driven. Insurance professionals, including claim adjusters and underwriters, spend considerable time handling documentation, with a significant portion of their workday consumed by paperwork and administrative tasks. This makes solutions that speed up the process of reviewing documents all the more important.

Retrieval-augmented generation (RAG) applications are a game-changer for insurance companies, enabling them to harness the power of unstructured data while promoting accessibility and flexibility. This is especially true for PDFs, which, despite their prevalence, are difficult to search, leading claim adjusters and underwriters to spend hours reviewing contracts, claims, and guidelines in this common format. By combining MongoDB and Superduper.io, you can build a RAG-powered system for PDF search, bringing efficiency and accuracy to this cumbersome task. With a PDF search application, users can simply type a question in natural language, and the app will sift through company data, provide an answer, summarize the content of the documents, and indicate the source of the information, including the page and paragraph where it was found. In this blog, we will dive into the architecture of how this PDF search application can be created and what it looks like in practice.

Why should insurance companies care about PDF search?

Insurance firms rely heavily on data processing. To make investment decisions or handle claims, they leverage vast amounts of data, most of it unstructured. As previously mentioned, underwriters and claim adjusters need to comb through numerous pages of guidelines, contracts, and reports, typically in PDF format. Manually finding and reviewing every piece of information is time-consuming and can easily lead to expensive mistakes, such as incorrect risk estimations. Quickly finding and accessing relevant content is key. Combining Atlas Vector Search and LLMs to build RAG apps can directly impact the bottom line of an insurance company.

Behind the scenes: System architecture and flow

As mentioned, MongoDB and Superduper.io underpin our information retrieval system. Let's break down the process of building it:

1. The user adds the PDFs that need to be searched.
2. A script scans them, creates the chunks, and vectorizes them (see Figure 1). The chunking step is carried out using a sliding-window methodology, which ensures that potentially important transitional data between chunks is not lost, helping to preserve continuity of context.
3. Vectors and chunk metadata are stored in MongoDB, and an Atlas Vector Search index is created (see Figure 1).
4. The PDFs are now ready to be queried. The user selects a customer and asks a question, and the system returns an answer, shows where it was found, and highlights the section with a red frame (see Figure 3).

Figure 1: PDF chunking, embedding creation, and storage orchestrated with Superduper.io

Each customer has a guidelines PDF associated with their account based on their residency. When the user selects a customer and asks a question, the system runs a Vector Search query on that particular document, seamlessly filtering out the non-relevant ones. This is made possible by the pre-filtering field included in the search query. Atlas Vector Search also takes advantage of MongoDB's new Search Nodes dedicated architecture, enabling better optimization for the right level of resourcing for specific workload needs.
Search Nodes provide dedicated infrastructure for Atlas Search and Vector Search workloads, allowing you to optimize compute resources and scale your search needs fully independent of the database. Search Nodes provide better performance at scale, delivering workload isolation, higher availability, and the ability to optimize resource usage.

Figure 2: PDF querying flow, orchestrated with Superduper.io

Superduper.io

Superduper.io is an open-source Python framework for integrating AI models and workflows directly with and across major databases, enabling more flexible and scalable custom enterprise AI solutions. It enables developers to build, deploy, and manage AI on their existing data infrastructure and data, while using their preferred tools, eliminating data migration and duplication. With Superduper.io, developers can:

- Bring AI to their databases, eliminating data pipelines and data movement, and minimizing engineering effort, time to production, and computation resources.
- Implement AI workflows with any open- or closed-source AI models and APIs, on any type of data, with any AI or Python framework, package, class, or function.
- Safeguard their data by switching from APIs to hosting and fine-tuning their own models, on their own existing infrastructure, whether on-premises or in the cloud.
- Easily switch between embedding models and LLMs, from other API providers to hosting their own models, on Hugging Face or elsewhere, just by changing a small configuration.

Build next-generation AI apps on your existing database

Superduper.io provides an array of sample use cases and notebooks that developers can use to get started, including vector search with MongoDB, embedding generation, multimodal search, retrieval-augmented generation (RAG), transfer learning, and many more. The demo showcased in this post is adapted from an app previously developed by Superduper.io.

Let's put it into practice

To show how this could work in practice, let's look at an underwriter handling a specific case. The underwriter is seeking to identify the risk control measures, as shown in Figure 3 below, but needs to look through documentation. Analyzing the guidelines PDF associated with a specific customer helps determine the loss in the event of an accident or the new premium in the case of a policy renewal. The app assists by answering questions and displaying relevant sections of the document.

Figure 3: Screenshot of the UI of the application, showing the question asked, the LLM's answer, and the reference document where the information is found

By integrating MongoDB and Superduper.io, you can create a RAG-powered system for efficient and accurate PDF search. This application allows users to type questions in natural language, enabling the app to search through company data, provide answers, summarize document content, and pinpoint the exact source of the information, including the specific page and paragraph.

If you would like to learn more about Vector Search-powered apps and Superduper.io, visit the following resources:

- PDF Search in Insurance GitHub repository
- Search PDFs at Scale with MongoDB and Nomic
- Superduper.io GitHub, which includes notebooks and examples
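To ground the flow above, here is a minimal sketch of the two distinctive steps: sliding-window chunking, and a vector query pre-filtered to a single customer's guidelines PDF. Collection, index, and field names are illustrative, and the question embedding is assumed to come from whichever model the pipeline uses.

```python
from pymongo import MongoClient

chunks = MongoClient("<ATLAS_CONNECTION_STRING>")["insurance"]["pdf_chunks"]

def sliding_window_chunks(text, size=1000, overlap=200):
    # Overlapping windows keep transitional sentences that a hard cut at a
    # chunk boundary would otherwise split, preserving continuity of context.
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def search_guidelines(question_embedding, customer_id):
    # Pre-filtering on customerId restricts the vector query to the single
    # guidelines PDF associated with the selected customer. (customerId must
    # be declared as a filter field in the vector index definition.)
    pipeline = [
        {
            "$vectorSearch": {
                "index": "pdf_chunks_index",
                "path": "chunkEmbedding",
                "queryVector": question_embedding,
                "filter": {"customerId": customer_id},
                "numCandidates": 200,
                "limit": 5,
            }
        },
        {"$project": {"text": 1, "page": 1, "paragraph": 1}},
    ]
    return list(chunks.aggregate(pipeline))
```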

June 24, 2024

Search PDFs at Scale with MongoDB and Nomic

Data is only valuable if it's accessible. For example, storing photos, audio files, or PDFs without the ability to extract information from them is like keeping junk in your basement, thinking you might need it someday. The problem comes when that day arrives and you have to dig through the junk to find what you need. Until now, companies have followed a similar approach to unstructured data: store everything in data lakes for future use. But whether it's junk in a basement or data in a data lake, the result is the same: access is hard or impossible.

However, the latest advancements in AI have disrupted this status quo. AI can effectively and efficiently compare similar objects by generating a vector representation, or embedding, of a data object. This capability has revolutionized industries by enabling faster and more precise search, categorization, and recommendation systems than ever before. Whether it's being used to compare text, documents, images, or complex patterns in data, embeddings allow for nuanced interpretations and connections that were impossible with traditional methods. By taking advantage of AI, users can uncover insights and make decisions with unprecedented speed and accuracy. A particularly interesting use case is PDF search, since every company in the world deals with PDFs in one way or another. While PDFs allow portability across platforms and operating systems, most PDF readers only allow for basic exact-match queries.

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

PDF search powered by MongoDB and Nomic

Enter MongoDB and Nomic: MongoDB Atlas Vector Search with Nomic Embed equips organizations with a powerful and affordable AI-powered search solution for large PDF collections. Nomic is a machine learning company specializing in explainable and accessible AI, and Nomic Embed is its flagship text embedding model, with out-of-the-box features suitable for scalable PDF search:

- Long context: Nomic Embed breaks new ground by supporting a context length of 8192 tokens, exceeding the standard 2048. This extended context makes the model ideal for real-world applications that involve processing large PDFs and documents.
- High throughput: While achieving top performance on the MTEB embedding benchmark, Nomic Embed is smaller than similarly performing models. At only 137 million parameters and 548MB, Nomic Embed enables high-throughput embedding generation for data-heavy workflows or streaming applications.
- Flexible storage: Nomic Embed provides adjustable embedding size via Matryoshka representation learning. Users can freely choose to store the first 64, 128, 256, or 512 embedding dimensions out of the full 768, depending on their project requirements. Smaller embedding sizes come at a minimal performance loss while providing lower storage costs and faster computation.

To put Nomic Embed's abilities in context, consider a company that processes a high volume of PDFs—say 100,000 documents per month—with an average length of 20 pages each. To improve database retrieval speed, these documents can be partitioned into smaller chunks, such as 2 pages per chunk (see Figure 1 below). Assuming a full page typically contains around 500 words, each document chunk would consist of approximately 1,000 words.

Figure 1: PDF chunking, embedding creation with Nomic, and storage into MongoDB

Embedding models process words as numerical tokens, where a general rule of thumb is 3/4 word = 1 token.
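Working through the arithmetic of this scenario (all figures come from the text above; the words-to-tokens rule is only a rough heuristic):

```python
# Back-of-the-envelope sizing for the hypothetical workload described above.
words_per_page = 500
pages_per_chunk = 2
words_per_chunk = words_per_page * pages_per_chunk       # ~1,000 words
tokens_per_chunk = round(words_per_chunk * 4 / 3)        # ~1,333 tokens
assert tokens_per_chunk <= 8192  # fits comfortably in Nomic Embed's context

pdfs_per_month = 100_000
chunks_per_pdf = 20 // pages_per_chunk                   # 20 pages -> 10 chunks
embeddings_per_month = pdfs_per_month * chunks_per_pdf
print(f"{embeddings_per_month:,} embeddings per month")  # 1,000,000
```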
One embedding is more than sufficient to represent a document chunk in this case, as 4/3 × 1,000 tokens fit comfortably within Nomic Embed's long context window. A PDF search application for this company would require 100,000 PDFs × 10 chunks = 1,000,000 embeddings. Benchmarked on Nomic's AWS SageMaker real-time inference offering on a single-GPU ml.g5.xlarge instance, the total runtime is under 4 hours, for a total of $15.60 per month. A similarly performing embedding model, such as OpenAI's text-embedding-3-small, costs $26.66 per month to generate the same number of embeddings.

Once the embeddings are stored in MongoDB Atlas, it's possible to create an Atlas Vector Search index to unlock their potential. Building a PDF search application at this point becomes straightforward. The query text is vectorized, and the embedding is fed to Atlas Vector Search to retrieve similar vectors. The result is a list of the most semantically similar sections of the PDF relevant to the original text. This is a significant leap forward compared to a simple "ctrl-F" search, as it captures meaning rather than just keyword matches. This process can be further improved by implementing a retrieval-augmented generation (RAG) pipeline, combining Atlas Vector Search and a large language model (LLM). As shown in Figure 2, this approach allows users to ask questions in natural language about the content of the PDF. The relevant documents are then fed to the LLM as context, and the AI is able to provide structured answers by leveraging knowledge about the data.

Figure 2: Retrieval-augmented generation flow with Nomic

In a nutshell, Nomic and MongoDB provide the building blocks for advanced RAG applications, equipping developers with a cost-effective and integrated toolset.

Seamless integration, supercharged search: Nomic embeddings in MongoDB Atlas

MongoDB Atlas seamlessly ingests Nomic embeddings with its flexible document storage format. Depending on the application, embeddings and additional metadata can be neatly stored together or separately in MongoDB collections. MongoDB Atlas and Nomic Embed are both available as AWS Marketplace offerings for same-VPC deployments.

MongoDB Atlas Stream Processing is a perfect fit for Nomic Embed's high-throughput capabilities. Incoming data streams are robustly processed and can be combined with MongoDB Database Triggers to generate embeddings for immediate downstream use. Given Nomic Embed's lightweight nature and offline capabilities (via private or local deployments from open source), embeddings can be produced and ingested into MongoDB at extremely rapid transfer rates.

MongoDB Atlas Vector Search delivers a fast and accessible method to leverage Nomic embeddings for semantic search. MongoDB Atlas Vector Search lets you combine these fast vector search queries with traditional database queries on various metadata, providing a flexible and powerful analytics tool for data insights, user recommendations, and more.

Industry use cases

PDFs are ubiquitous. In one way or another, every company in the world needs to extract and analyze PDF content to make business decisions or comply with regulations. Let's have a look at some industry use cases:

Financial services

The financial services industry is constantly bombarded with essential updates, including market data, financial statements, and regulatory changes. Much of this information, such as financial statements, annual reports, and regulatory filings, resides in PDF format.
Efficient and reliable navigation through these documents is crucial for gaining a competitive edge in investment decision-making. For example, investors scrutinize key financial metrics such as revenue growth, profit margins, and cash flow trends extracted from income statements, balance sheets, and cash flow statements. They use this information to compare companies, gauging their strategic direction, risks, and competitive positioning before investing. However, accessing and extracting data from these PDFs can be a time-consuming challenge, hindering agility in the fast-paced financial landscape. Here, semantic search for financial PDFs offers a dramatic improvement in information discovery. By leveraging semantic search technology, which interprets the intent and contextual meaning behind a search query, financial services professionals can significantly enhance their ability to find relevant information. This applies equally to the broader financial industry, including areas like market analysis, performance evaluation, and many more.

Retail

In the retail industry, processing hundreds of thousands of invoices from numerous suppliers annually is a common challenge. Most invoices are in PDF format, and the difficulty arises from the combination of invoice volume and the variability in layouts and languages from one supplier to another. This makes manual processing impractical and error-prone. The question becomes: how can retailers automate this end-to-end process efficiently and accurately? The answer lies in solutions that utilize advanced technologies like AI and PDF search capabilities. By leveraging these solutions, retailers can automatically scan invoices, extract relevant data, and validate it against purchase orders and received goods. Moreover, these solutions offer the flexibility to adapt to different invoice layouts without the need for templates, ensuring scalability and efficiency gains. With increased automation rates and improved accuracy, retailers can shift focus from low-value manual tasks to more strategic initiatives, accelerating their digital transformation journey and unlocking significant cost savings along the way.

Manufacturing & motion

Vast amounts of unstructured data are contained in PDFs across the manufacturing and automotive industries, from machine instruction booklets to production or maintenance guidelines, Six Sigma best practices, production results, and team lead annotations. All this valuable data must be shared, read, and stored manually, introducing significant friction when it comes to leveraging its full potential. With MongoDB Atlas Vector Search, manufacturing companies have the opportunity to completely revive this data and make real use of it in their day-to-day operations, all while reducing the time spent managing these manuals and having everything ready to be accessed. It is as simple as vectorizing the documents, uploading them to MongoDB Atlas, and connecting a RAG-enabled application to this data source. With this, operators in a manufacturing plant can describe a problem to a smart interface and ask how to troubleshoot it. The interface will retrieve the specific parts of the manual that show how to address the issue. Moreover, it can also retrieve notes from previous operators, team leaders, or earlier troubleshooting efforts, providing rich context and accelerating the problem-solving process.
PDF RAG-enabled applications in manufacturing open up a wide range of operational improvements that directly benefit the company's bottom line.

PDF search at scale

In today's data-driven world, extracting insights from unstructured data like PDFs is challenging. Traditional search methods fall short, but advancements in AI, like Nomic Embed, have revolutionized PDF search. By leveraging MongoDB with Nomic Embed, organizations gain a powerful and cost-effective AI-powered solution for large PDF collections. Nomic Embed's extensive context and high-throughput capabilities, together with MongoDB's seamless integration and powerful analytics, enable efficient and reliable PDF search applications. This translates to enhanced data accessibility, faster decision-making, and improved operational efficiency.

Don't waste time struggling with traditional PDF search! Apply for an innovation workshop to discuss what's possible with our industry experts. If you would like to discover more about MongoDB and GenAI:

- Building a RAG LLM with Nomic Embed and MongoDB
- From Relational Databases to AI: An Insurance Data Modernization Journey

April 30, 2024

Retrieval Augmented Generation for Claim Processing: Combining MongoDB Atlas Vector Search and Large Language Models

Following up on our previous blog, AI, Vectors, and the Future of Claims Processing: Why Insurance Needs to Understand The Power of Vector Databases, we'll pick up the conversation right where we left off. There, we discussed at length how Atlas Vector Search can benefit the claim process in insurance and briefly covered retrieval-augmented generation (RAG) and large language models (LLMs).

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

One of the biggest challenges for claim adjusters is pulling and aggregating information from disparate systems and diverse data formats. PDFs of policy guidelines might be stored in a content-sharing platform, customer information locked in a legacy CRM, and claim-related pictures and voice reports in yet another tool. All of this data is not just fragmented across siloed sources and hard to find but also in formats that have historically been nearly impossible to index with traditional methods. Over the years, insurance companies have accumulated terabytes of unstructured data in their data stores but have failed to capitalize on the possibility of accessing and leveraging it to uncover business insights, deliver better customer experiences, and streamline operations. Some of our customers even admit they're not fully aware of all the data in their archives. There's a tremendous opportunity to leverage this unstructured data to benefit the insurer and its customers.

Our image search post covered part of the solution to these challenges, opening the door to working more easily with unstructured data. RAG takes it a step further, integrating Atlas Vector Search and LLMs, thus allowing insurers to go beyond the limitations of baseline foundation models, making them context-aware by feeding them proprietary data. Figure 1 shows how the interaction works in practice: through a chat prompt, we can ask questions of the system, and the LLM returns answers to the user, showing the references it used to retrieve the information contained in the response. Great! We've got a nice UI, but how can we build a RAG application? Let's open the hood and see what's in it!

Figure 1: UI of the claim adjuster RAG-powered chatbot

Architecture and flow

Before we start building our application, we need to ensure that our data is easily accessible and in one secure place. Operational data layers (ODLs) are the recommended pattern for wrangling data to create single views. This post walks the reader through the process of modernizing insurance data models with Relational Migrator, helping insurers migrate off legacy systems to create ODLs. Once the data is organized in our MongoDB collections and ready to be consumed, we can start architecting our solution. Building upon the schema developed in the image search post, we augment our documents with a few fields that allow adjusters to ask more complex questions about the data and solve harder business challenges, such as resolving a claim in a fraction of the time with increased accuracy. Figure 2 shows the resulting document with two highlighted fields, "claimDescription" and its vector representation, "claimDescriptionEmbedding". We can now create a Vector Search index on this array, a key step in facilitating retrieval of the information fed to the LLM.
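Programmatically, creating that index might look like the following minimal sketch (assuming pymongo 4.7+ against an Atlas cluster; the index name is illustrative, and the dimension count must match whichever embedding model you use):

```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

claims = MongoClient("<ATLAS_CONNECTION_STRING>")["insurance"]["claims"]

index = SearchIndexModel(
    name="claim_description_index",
    type="vectorSearch",
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "claimDescriptionEmbedding",
                "numDimensions": 1536,  # must match your embedding model's output
                "similarity": "cosine",
            }
        ]
    },
)
claims.create_search_index(model=index)
```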
Figure 2: Document schema of the claim collection; the highlighted fields are used to retrieve the data that will be passed as context to the LLM

Having prepared our data, building the RAG interaction is straightforward; refer to this GitHub repository for the implementation details. Here, we'll just discuss the high-level architecture and the data flow, as shown in Figure 3 below:

1. The user enters the prompt, a question in natural language.
2. The prompt is vectorized and sent to Atlas Vector Search; similar documents are retrieved.
3. The prompt and the retrieved documents are passed to the LLM as context.
4. The LLM produces an answer for the user (in natural language), considering the context and the prompt.

Figure 3: RAG architecture and interaction flow

It is important to note how the semantics of the question are preserved throughout the different steps. The reference to "adverse weather"-related accidents in the prompt is captured and passed to Atlas Vector Search, which surfaces claim documents whose descriptions relate to similar concepts (e.g., rain) without needing to mention them explicitly. Finally, the LLM consumes the relevant documents to produce a context-aware answer referencing rain, hail, and fire, as we'd expect based on the user's initial question.

So what?

To sum it all up, what's the benefit of combining Atlas Vector Search and LLMs in a claim processing RAG application?

- Speed and accuracy: With the data centrally organized and ready to be consumed by LLMs, adjusters can find all the necessary information in a fraction of the time.
- Flexibility: LLMs can answer a wide spectrum of questions, meaning applications require less upfront system design. There is no need to build custom APIs for each piece of information you're trying to retrieve; just ask the LLM to do it for you.
- Natural interaction: Applications can be interrogated in plain English without programming skills or system training.
- Data accessibility: Insurers can finally leverage and explore unstructured data that was previously hard to access.

Not just claim processing

The same data model and architecture can serve additional personas and use cases within the organization:

- Customer service: Operators can quickly pull customer data and answer complex questions without navigating different systems. For example, "Summarize this customer's past interactions," "What coverages does this customer have?" or "What coverages can I recommend to this customer?"
- Customer self-service: Simplify your members' experience by enabling them to ask questions themselves. For example, "My apartment is flooded. Am I covered?" or "How long do windshield repairs take on average?"
- Underwriting: Underwriters can quickly aggregate and summarize information, providing quotes in a fraction of the time. For example, "Summarize this customer's claim history," or "I am renewing a customer's policy. What are the customer's current coverages? Pull everything related to the policy entity/customer. I need to get baseline info. Find relevant underwriting guidelines."

If you would like to discover more about converged AI and application data stores with MongoDB, take a look at the following resources:

- RAG for claim processing GitHub repository
- From Relational Databases to AI: An Insurance Data Modernization Journey
- Modernize your insurance data models with MongoDB and Relational Migrator

Head over to our quick-start guide to get started with Atlas Vector Search today.
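As a closing sketch, here are steps 2 through 4 of the flow above condensed into one function, reusing the claims collection and index from the previous snippet (embed_text and ask_llm are hypothetical stand-ins for your embedding model and LLM calls):

```python
def answer_question(question, embed_text, ask_llm):
    # Step 2: vectorize the prompt and retrieve semantically similar claims
    docs = claims.aggregate([
        {
            "$vectorSearch": {
                "index": "claim_description_index",
                "path": "claimDescriptionEmbedding",
                "queryVector": embed_text(question),
                "numCandidates": 150,
                "limit": 3,
            }
        },
        {"$project": {"claimDescription": 1}},
    ])
    # Step 3: pass the prompt plus retrieved documents to the LLM as context
    context = "\n".join(doc["claimDescription"] for doc in docs)
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # Step 4: the LLM produces a natural-language, context-aware answer
    return ask_llm(prompt)
```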

April 18, 2024

From Relational Databases to AI: An Insurance Data Modernization Journey

Imagine you're a data architect, a developer, or a data engineer at an insurance company. Management has asked you and your team to build a new AI claim adjustment system, a customer-facing LLM-powered chatbot, and an application to streamline the underwriting process. However, doing so is far from straightforward due to the challenges you face on a daily basis. The bulk of your time is spent navigating your company's outdated legacy systems, which were built in the 1970s and 1980s. Some of these legacy platforms were written in COBOL and CICS, and today very few people on your team know how to develop and maintain those technologies. Moreover, the data models you work with are another source of frustration. Every interaction with them is a reminder of the intricate structures that have evolved over time, making data manipulation and analysis a nightmare. In sum, legacy systems are preventing your team—and your company—from innovating and keeping up with both your industry and customer demands.

Whether you're trying to modernize your legacy systems to improve operational efficiency and boost developer productivity, or you want to build AI-powered apps that integrate with large language models (LLMs), MongoDB has a solution for that. In this post, we'll walk you through a journey that starts with a relational data model refactored into MongoDB collections, moves on to vectorization and querying of unstructured data, and ends with retrieval-augmented generation (RAG): asking large language models (LLMs) questions about data in natural language.

Identifying, modernizing, and storing the data

Our journey starts with an assessment of the data sources we want to work with. As shown below, we can bucket the data into three different categories:

- Structured legacy data: Tables of claims, coverages, billings, and more. Is your data locked in rigid relational schemas? This tutorial is a step-by-step guide on how to migrate a real-life insurance relational model with the help of MongoDB Relational Migrator, refactoring 21 tables into only five MongoDB collections.
- Structured data (JSON): You might have files of policies, insurance products, or forms in JSON format. Check out our docs to learn how to insert those into a MongoDB collection.
- Unstructured data (PDFs, audio, images, etc.): If you need to create and store a numerical representation (vector embedding) of, for instance, claim-related photos of accidents or PDFs of policy guidelines, have a look at this blog, which walks through the process of generating embeddings of pictures of car crashes and persisting them alongside existing fields in a MongoDB collection.

Figure 1: Storing different types of data into MongoDB

Regardless of the original format or source, our data has finally landed in MongoDB Atlas in what we call a converged AI data store: a platform that centrally integrates and organizes enterprise data, including vectors, enabling the development of ML- and AI-powered applications.

Accessing, experimenting, and interacting with the data

It's time to put the data to work. The converged AI data store unlocks a plethora of use cases and efficiency gains, both for the business and for developers. The next step of the journey covers the different ways we can interact with our data:

- Database and full-text search: Learn how to run database queries, starting from the basics and moving up to advanced features such as facets, fuzzy search, autocomplete, highlighting, and more with Atlas Search.
- Vector Search: We can finally leverage unstructured data. The image search blog mentioned earlier also explains how to create a Vector Search index and run vector queries against embeddings of photos.
- RAG: Combining Vector Search and the power of LLMs, it is possible to interact with our data in natural language (see Figure 2 below), asking complex questions and getting detailed answers. Follow this tutorial to become a RAG expert.

Figure 2: Retrieval-augmented generation (RAG) diagram, where we dynamically combine our custom data with the LLM to generate reliable and relevant outputs

Having explored all the different ways we can ask questions of the data, we've made it to the end of our journey. You are now ready to modernize your company's systems and finally keep up with the business's demands. What will you build next?

If you would like to discover more about converged AI and application data stores with MongoDB, take a look at the following resources:

- AI, Vectors, and the Future of Claims Processing: Why Insurance Needs to Understand The Power of Vector Databases
- Build a ML-Powered Underwriting Engine in 20 Minutes with MongoDB and Databricks

March 14, 2024

Every Operational Data Layer (ODL) Can Benefit From Search

In today's digital landscape, organizations frequently encounter the daunting challenge of managing complex data architectures. Multiple systems, diverse technologies, and a variety of programming languages become entwined, making smooth operations a significant struggle. A frequent example of this issue is seen in major banks still relying on a banking system built in the 1970s, continuing to run on a mainframe with minimal updates. The consequence is a complex architecture, as seen in Figure 1, where data is scattered across various systems, creating inefficiencies and hindering seamless operations. Offloading the data from one or more monolithic systems is a well-proven approach to increase agility and deliver new, innovative services to external and internal customers. In this blog, we will discuss how search can make operational data layers (ODLs)—an architectural pattern that centrally integrates and organizes siloed enterprise data, making it available to consuming applications—an even more powerful tool.

Figure 1: Complex data architecture

Operational Data Store (ODS) as a solution

To tackle the complexities of their existing data architecture, organizations have turned to operational data stores (ODS). An ODS serves as a secondary data store that holds data replicated from primary transactional systems, as seen in Figure 2. Organizations can feed their ODS with change data capture technologies.

Figure 2: Conceptual model of an operational data layer

The evolutionary path of adoption

Implementing an ODS requires a thoughtful approach that aligns with the organization's digital transformation journey. Typically, the adoption path consists of several stages, as seen in Figure 3. Initially, organizations focus on extracting data from one system into their operational data store, allowing them to operate on a more unified dataset. Gradually, they can retire legacy systems and eliminate the need for intermediate data streams. The key benefit of this incremental approach is that it delivers value to the business at every step (e.g., offloading mainframe operations), eliminating the need for a complete overhaul and minimizing disruption.

Figure 3: Evolution of a basic ODS into a system of record

Areas of application

An ODS can support the business in three different ways:

- Data access layers allow organizations to free their data from the limitations imposed by data silos and technological variations. Organizations consolidate data from different sources that often use different data storage technologies and paradigms, creating a unified view that simplifies data access and analysis. This pattern is mainly used to enable modern APIs, speed up development of new customer services, and improve responsiveness and resiliency.
- Operational data layer (ODL): The ODL is an internally focused layer that aids in complex processing workflows. It serves as a hub for orchestrating and managing data across various stages of processing. The ODL empowers organizations to enrich and improve data iteratively, resulting in more powerful and accurate insights. It provides a holistic view of data and process information, an improved customer experience, and reduced operational costs.
- Developer ODL: Building a developer-focused ODL can provide significant advantages during the development cycle. By making data readily available to developers, organizations can accelerate the development process and gain a comprehensive understanding of their data structures.
This, in turn, helps in identifying and addressing issues early on, leading to improved data models and better system performance. In a nutshell, this pattern helps reduce developer training time, streamlines development, and speeds up testing and test automation.

The power of search in ODS

So how can every ODL benefit from search capabilities, and how can MongoDB Atlas Search help? Atlas Search plays a crucial role in maximizing the value of an ODS. When we have questions or are searching for an answer, our natural interaction with information is primarily through search. We excel at interpreting imprecise queries and extracting relevant information from vast datasets. By incorporating search capabilities into an ODS with Atlas Search, organizations can empower their users to explore, analyze, and gain valuable insights from their data.

Consider the example of a banking organization with a complex web of interconnected systems. Searching for specific transactions or identifying patterns becomes a daunting task, especially when dealing with numeric identifiers across multiple systems. Traditionally, this involved manual effort and navigating through numerous systems. However, with a search-enabled ODS, users can quickly query the relevant data and retrieve candidate matches. This greatly streamlines the process, saves time, and enhances efficiency.

Practical examples: Leveraging ODS and Atlas Search

Let's explore a few practical examples that demonstrate the power of ODS and Atlas Search functionality:

- Operational data layer for payments processing: A financial institution implemented an ODS-based operational layer for processing payments. By aggregating data from multiple sources and leveraging search capabilities, it achieved faster and more accurate payment processing. This enabled the institution to investigate issues, ensure consistency, and deliver a superior customer experience.
- Customer 360 view: Another organization leveraged an ODS to create a comprehensive view of its customers, empowering relationship managers and bank tellers with a holistic understanding. With search functionality, they could quickly locate customer information across various systems, saving time and improving customer service.
- Post-trade trading platform: A global broker operating across 25 different exchanges utilized an ODS to power its post-trade trading platform. By leveraging search capabilities, it simplified the retrieval of data from various systems, leading to efficient and reliable trading operations.

Conclusion

In the dynamic world of data management, operational data stores (ODS) have emerged as a crucial component for organizations seeking to streamline their data architectures. By adopting an incremental approach and leveraging search functionality such as Atlas Search, organizations can enhance data accessibility, improve operational efficiency, and drive valuable insights. The power of search within an ODS lies in its ability to simplify data retrieval, accelerate development cycles, and enable users to interact with data in a more intuitive and efficient manner. By embracing these practices, organizations can unlock the true potential of their data, paving the way for a more productive and data-driven future.
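To make the banking example concrete, here is a minimal sketch of a fuzzy, typo-tolerant customer lookup with Atlas Search (the collection, index, and field names are hypothetical):

```python
from pymongo import MongoClient

customers = MongoClient("<ATLAS_CONNECTION_STRING>")["bank"]["customers"]

pipeline = [
    {
        "$search": {
            "index": "customer_search_index",
            "text": {
                "query": "Jon Smiht",  # a misspelled name still finds candidates
                "path": ["firstName", "lastName"],
                "fuzzy": {"maxEdits": 2},  # tolerate up to two edits per term
            },
        }
    },
    {"$limit": 5},
]

for candidate in customers.aggregate(pipeline):
    print(candidate["firstName"], candidate["lastName"])
```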
For more information on Atlas Search, check out the following resources:

- Watch this MongoDB.local talk, which expands on this blog: Every ODS Needs Search: A Practical Primer Based on Client Experiences
- Discover MongoDB's search functionalities
- Learn how Helvetia accelerates cloud-native modernization by 90% with MongoDB Atlas and MongoDB Atlas Search

November 1, 2023

AI, Vectors, and the Future of Claims Processing: Why Insurance Needs to Understand The Power of Vector Databases

We're just under a year since OpenAI released ChatGPT, unleashing a wave of hype, investment, and media frenzy around the potential of generative AI to transform how we do business and interact with the world. But while the majority of the investment dollars and media attention zeroed in on the disruptive capabilities of large language models (LLMs), there's a crucial component underpinning this breakthrough technology that hasn't received the attention it deserves: the humble vector database.

Vector databases, a type of database that stores numeric representations (or vectors) of your data, allow advanced machine learning algorithms to make sense of unstructured data like images, sound, or unstructured text and return relevant results. (You can read more about vector databases and vector search on our Developer Hub.) For industries dealing with vast amounts of data, such as insurance, the potential impact of vector databases and vector search is immense. In this blog, we will focus on how vectors can speed up and increase the accuracy of claim adjustment.

Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

The claims process… vectorized!

The process of claim adjustment is time-consuming and error-prone. As one insurance client recently told us, "If an adjuster touches it, we lose money." For each claim, adjusters need to go through past claims from the client and related guidelines, which are usually scattered across multiple systems and formats, making it difficult to find relevant information and time-consuming to produce an accurate estimate of what needs to be paid.

For this blog, let's use the example of a car accident claim. In our example, a car has just crashed into another vehicle. The driver gets out and starts taking pictures of the damage, uploading them to their car insurance app, where an adjuster receives the photos. Typically, the adjuster would painstakingly comb through past claims and parse guidelines to work up an estimate of the damage and process the claim. But with a vector database, the adjuster can simply ask an AI to "show me images similar to this crash," and a Vector Search-powered system can return photos of car accidents with similar damage profiles from the claims history database. The adjuster is now able to quickly compare the car accident photos with the most relevant ones in the insurer's claim history.

What's more, with MongoDB it is possible to store vectors as arrays alongside existing fields in a document. In our car crash scenario, this means that our fictional adjuster can not only retrieve the most similar pictures but also access complementary information stored in the same database: claim notes, loss amount, car model, car manufacturing year, and so on. The adjuster now has a comprehensive view of past accidents and how they were handled by the insurance company, in seconds.

For this use case, we have focused on image search, but most data formats can be vectorized, including text and sound. This means that an adjuster could query using claim notes and find similar notes in the claim history or related paragraphs in the guidelines. Vector Search is an extremely powerful tool, as it unlocks access to unstructured data that was previously hard to work with, such as PDFs, images, or audio files.

How does this work in practice?
How does this work in practice? Let’s go through each step of the process:
A search index is configured on an existing collection in MongoDB Atlas.
An image set is sent to an embedding model that generates the image vectors.
The vectors are then stored in Atlas, alongside the current metadata found in the collection.

Figure 1: A dataset of photos of past accidents is vectorized and stored in Atlas

We then run our query against the existing database, and Vector Search returns the most similar images.

Figure 2: An image similarity query is performed, and the top 5 similar images are returned

Example user interface: a claim-adjuster dashboard leveraging Vector Search.

Figure 3: UI of the claim adjuster application

We can go a step further and use our vectors to provide an LLM with the context necessary to generate more reliable and accurate outputs, a technique known as retrieval-augmented generation (RAG). These outputs can include:
Natural language processing for tasks such as chatbots and question-answering: think of a claim adjuster interacting with a conversational interface and asking questions such as “Give me the average loss amount for accidents related to one of the photos of claim XYZ” or “Summarize the content of the guidelines related to this accident”
Computer vision and audio processing, from image classification and object detection to speech recognition and translation
Content generation, including creating text-based documentation, reports, and computer code, or converting text to an image or video

Figure 4 brings together the workflow enabling RAG for the LLM.

Figure 4: Dynamically combining your custom data with the LLM to generate reliable and relevant outputs

If you’re interested in seeing how to do this in practice and want to start prototyping, check out our GitHub repository and dive right in!

Go hands-on!

Vector databases and vector search will transform how insurers do business. In this blog, we have explored how vectors can be leveraged to speed up the work of claim adjusters, which directly translates into an improved customer experience and, crucially, cost savings through faster claims processing and enhanced accuracy. Elsewhere, vector search could be used for:
Enhanced customer service: Imagine being able to instantly pull up comprehensive policyholder profiles, their claims history, and any related information with a simple search. Vector search makes this possible, facilitating better interactions and more informed decisions.
Personalized recommendations: As AI-driven personalization becomes the gold standard, vector search aids in accurately matching policyholders with tailor-made insurance products and services that meet their unique needs.
Scaled AI efforts: From improving customer service chatbots to detecting fraudulent activities, vector-based models can handle tasks more efficiently than traditional methods, helping scale AI implementations across the organization.

Atlas Vector Search goes one step further. By unifying the operational database and the vector store in a single platform, MongoDB Atlas turbocharges the process of building semantic search and AI-powered applications, empowering insurers to quickly build applications that take advantage of the value of their vast troves of data. Find out why leading insurers trust MongoDB. Head over to our quick-start guide to get started with Atlas Vector Search today.
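To make the retrieval step described above concrete, here is a minimal sketch of what an image similarity query could look like using Atlas Vector Search’s $vectorSearch aggregation stage, written in Python with PyMongo. The connection string, index name, collection, and field names are illustrative assumptions, and embed_image stands in for whichever image embedding model you use.

```python
from pymongo import MongoClient

# Illustrative connection details; replace with your own.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
claims = client["insurance"]["claims"]

# Hypothetical helper: embeds the new accident photo with the same
# model used to vectorize the claims history.
query_vector = embed_image("new_accident_photo.jpg")

pipeline = [
    {
        "$vectorSearch": {
            "index": "photo_vector_index",  # assumed Atlas Vector Search index
            "path": "photoEmbedding",       # field holding the image vectors
            "queryVector": query_vector,
            "numCandidates": 100,           # candidates considered before ranking
            "limit": 5,                     # return the top 5 most similar accidents
        }
    },
    {
        # Surface the operational fields alongside the similarity score,
        # so the adjuster sees notes and loss amounts, not just images.
        "$project": {
            "claimNotes": 1,
            "lossAmount": 1,
            "vehicle": 1,
            "photoUrl": 1,
            "score": {"$meta": "vectorSearchScore"},
        }
    },
]

for doc in claims.aggregate(pipeline):
    print(doc)
```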

October 4, 2023

Digital Payments - Latin America Focus

Pushed by new technologies and global trends, the digital payments market is flourishing all around the world. Valued at over USD 68 billion in 2021 and expected to grow at double-digit rates over the next decade, the market is seeing emerging economies lead the way in terms of relative expansion. A landscape once dominated by incumbents (big banks and credit card companies) is now being attacked by disruptors interested in capturing market share. According to a McKinsey study, there are four major factors at the core of this transformation:
Pandemic-induced adoption of cashless payments
E-commerce
Government push for digital payments
Fintechs

Interestingly, the pandemic has been a big catalyst in the rise of financial inclusion by encouraging alternative means of payment and new ways of borrowing and saving. These new digital services are in fact easier to access and to consume. In Latin America and the Caribbean (LAC), Covid spurred a dramatic increase in cashless payments: 40% of adults made an online purchase, 14% of them for the first time in their lives.

E-commerce has experienced stellar growth, with a penetration that will likely exceed 70% of the population in 2022. Domestic and global players, including Mercado Libre and Falabella, are pushing digital payment innovation to provide an ever smoother customer experience on their platforms.

Central banks are promoting new infrastructure for near real-time payments, with the goal of providing cheaper and faster technology for money transfer for both citizens and businesses. PIX is probably the biggest success story. An instant payment platform developed by Banco Central do Brasil (Brazil's central bank), it began operating in November 2020, and within 18 months, over 75% of adult Brazilians had used it at least once. The network processes around $250 billion in annualized payments, about 20% of total customer spend. Users (including self-employed workers) can send and receive real-time payments through a simple interface, 24/7 and free of charge; businesses have to pay a small fee. In the United States, the Federal Reserve has announced it will be launching FedNow, a payment network with characteristics similar to PIX, in mid-2023. These initiatives aim to solve issues such as slow settlements and low interoperability between parties.

Incumbent banks still own the lion’s share of the digital payments market; however, fintechs have been threatening this dominance by leveraging their agility to execute fast and cater to customer needs in innovative and creative ways. Without the burden of legacy systems to weigh them down, or business models tied to old payment rails, fintechs have been enthusiastic testers and adopters of new technologies and payment networks. Their mobile- and digital-first approach is helping them capture and retain the younger segment of the market, which expects integrated, real-time experiences that can be consumed at the touch of a button. An example is Paggo, a Guatemalan fintech that helps businesses streamline payments by enabling them to share a simple QR code that customers can scan to transfer money.

The payment landscape is not only affected by external forces; changes coming from within the industry are also reshaping the customer experience and enabling new services. ISO 20022 is a flexible standard for data interchange that is being adopted by most financial industry institutions to standardize the way they communicate with each other, thus streamlining interoperability.
Thanks to the adoption of ISO 20022, it’s more straightforward for banks to read and process messages, which translates into smoother internal processes and easier automation. For end users, this means faster and potentially cheaper payments, as well as richer and more integrated financial apps.

3DS2 is being embraced by the credit and debit card payments ecosystem. It is essentially a payment authentication solution for online shopping transactions. As with ISO 20022, the end user won’t even be aware of the underlying technology, but will simply experience a smoother, frictionless checkout. 3DS2 removes the need for the user to be redirected to their banking app for confirmation when buying an item online; everything now happens on the seller's website or app. All of this is done while also enhancing fraud detection and prevention, as the new solution makes it harder to use someone's credit or debit card without authorization. The benefit of 3DS2 adoption is twofold: on the one hand, users have increased confidence; on the other, merchants are happier because of a lower customer abandonment rate. Fear of fraud at checkout is, in fact, one of the main reasons for abandoning an online purchase. This solution is especially beneficial for the LAC region, where, despite wide adoption of e-commerce, people are still reluctant to transact online. One of the factors contributing to this apparent contradiction is fear of fraud: Cybersource reported that in 2019, a fifth of e-commerce transactions were flagged as potentially fraudulent and 20% were blocked, over six times the global average. It is evident that platforms’ adoption of 3DS2 will encourage online shoppers’ trust.

It is also worth mentioning the role played by blockchain and cryptocurrencies. Networks such as Ethereum or Lightning are effectively a decentralized alternative to the more traditional payment rails. Over the last few years, more and more people have started to use this technology because of its unique features: low fees, fast processing times, and global reach. Latin America has seen an explosion in adoption due to several factors, with remittances and stablecoin payments being highly prominent. Traditional remittance service providers are in fact slower and more expensive than blockchain networks. In Argentina especially, an increasing number of self-employed workers are asking to be paid in USDC or USDT, two stablecoins pegged to the value of the dollar, so they can stave off inflation.

It is clear that the payment landscape is rapidly evolving. On the one hand, customers expect products and services that integrate seamlessly with every aspect of their digital lives; whenever an app is perceived as slow, poorly designed, or simply missing some features, the user can easily switch to a competitor’s alternative. On the other hand, the number of players contending for their share of the digital payments market is expanding, driving down the margins of traditional products. The only way to successfully navigate this complex environment is to invest in innovation and in creating new business models. There is no single approach to facing such challenges, but there is no doubt that every successful business needs to harness the power of data and technology to provide its customers with the personalized, real-time experience they demand.
We at MongoDB believe that a solid foundation for achieving this is a highly flexible and scalable developer data platform, which allows companies to innovate faster and better monetize their payment data. Visit our Financial Services web page to learn more!

March 14, 2023

Build an ML-Powered Underwriting Engine in 20 Minutes with MongoDB and Databricks

The insurance industry is undergoing a significant shift from traditional to near-real-time, data-driven models, driven by both strong consumer demand and the urgent need for companies to process large amounts of data efficiently. Data from sources such as connected vehicles and wearables is used to calculate precise, personalized premium prices, while also creating new opportunities for innovative products and services. As insurance companies strive to provide personalized, real-time products, the move towards sophisticated, real-time data-driven underwriting models is inevitable. To process all of this information efficiently, software delivery teams will need to become experts at building and maintaining data processing pipelines. This blog will focus on how you can revolutionize the underwriting process within your organization by demonstrating how easy it is to create a usage-based insurance model using MongoDB and Databricks.

This blog is a companion to the solution demo in our GitHub repository. In the GitHub repo, you will find detailed step-by-step instructions on how to build the data upload and transformation pipeline leveraging MongoDB Atlas platform features, as well as how to generate, send, and process events to and from Databricks. Let’s get started.

Part 1: The use case data model
Part 2: The data pipeline
Part 3: Automated decisions with Databricks

Part 1: The use case data model

Figure 1: Entity relationship diagram - Usage-based insurance example

Imagine being able to offer your customers personalized usage-based premiums that take into account their driving habits and behavior. To do this, you'll need to gather data from connected vehicles, send it to a machine learning platform for analysis, and then use the results to create a personalized premium for your customers. You’ll also want to visualize the data to identify trends and gain insights. This unique, tailored approach will give your customers greater control over their insurance costs while helping you provide more accurate and fair pricing.

A basic example data model to support this use case would include customers, the trips they take, the policies they purchase, and the vehicles insured by those policies. This example builds out three MongoDB collections, as well as two Materialized Views. The full Hackloade data model, which defines all the MongoDB objects within this example, can be found here.

Part 2: The data pipeline

Figure 2: The data pipeline - Usage-based insurance

The data processing pipeline component of this example consists of sample data, a daily materialized view, and a monthly materialized view. A sample dataset of IoT vehicle telemetry data represents the motor vehicle trips taken by customers. It’s loaded into the collection named ‘customerTripRaw’ (1). The dataset can be found here and can be loaded via MongoImport or other methods. To create a materialized view, a scheduled Trigger executes a function that runs an Aggregation Pipeline. This generates a daily summary of the raw IoT data and lands it in a Materialized View collection named ‘customerTripDaily’ (2). Similarly, for the monthly materialized view, a scheduled Trigger executes a function that runs an Aggregation Pipeline that, on a monthly basis, summarizes the information in the ‘customerTripDaily’ collection and lands it in a Materialized View collection named ‘customerTripMonthly’ (3).
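To give a feel for how steps (2) and (3) work, here is a minimal sketch of the kind of daily rollup such an Aggregation Pipeline could perform, written in Python with PyMongo. An Atlas scheduled Trigger would express the same pipeline in a JavaScript function, and the telemetry field names (vin, tripStart, distanceDriven) are illustrative rather than the exact demo schema.

```python
from pymongo import MongoClient

# Illustrative connection details; replace with your own.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
db = client["usageBasedInsurance"]

daily_rollup = [
    {
        # Summarize raw IoT telemetry into one document per vehicle per day.
        "$group": {
            "_id": {
                "vin": "$vin",
                "day": {"$dateTrunc": {"date": "$tripStart", "unit": "day"}},
            },
            "totalDistanceDriven": {"$sum": "$distanceDriven"},
            "tripCount": {"$sum": 1},
        }
    },
    {
        # Upsert the summaries into the materialized view collection.
        "$merge": {
            "into": "customerTripDaily",
            "on": "_id",
            "whenMatched": "replace",
            "whenNotMatched": "insert",
        }
    },
]

db["customerTripRaw"].aggregate(daily_rollup)
```

The monthly view works the same way: the pipeline reads from ‘customerTripDaily’, truncates dates to the month, and merges the results into ‘customerTripMonthly’.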
For more information on these and other MongoDB platform features:
MongoDB Materialized Views
Building Materialized View on TimeSeries Data
MongoDB Scheduled Triggers
Cron Expressions

Part 3: Automated decisions with Databricks

Figure 3: The data pipeline with Databricks - Usage-based insurance

The decision-processing component of this example consists of a scheduled trigger and an Atlas Chart. The scheduled trigger collects the necessary data and posts the payload to a Databricks MLflow API endpoint (the model was previously trained using the MongoDB Spark Connector on Databricks). It then waits for the model to respond with a calculated premium based on the miles driven by a given customer in a month. The scheduled trigger then updates the ‘customerPolicy’ collection to append the new monthly premium calculation as a new subdocument within the ‘monthlyPremium’ array (a sketch of this flow appears at the end of this post). Finally, you can visualize your newly calculated usage-based premiums with an Atlas Chart!

In addition to the MongoDB platform features listed above, this section utilizes the following:
MongoDB Atlas App Services
MongoDB Functions
MongoDB Charts

Go hands-on

Automated digital underwriting is the future of insurance. In this blog, we introduced how you can build a sample usage-based insurance data model with MongoDB and Databricks. If you want to see how quickly you can build a usage-based insurance model, check out our GitHub repository and dive right in! Learn more about MongoDB and Insurance.
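As promised, here is a hedged sketch of the decision-processing flow from Part 3: post a customer's monthly mileage to a model-serving endpoint, then append the returned premium to the ‘monthlyPremium’ array. The endpoint URL, token, payload shape, and field names are all assumptions for illustration, not the exact demo implementation.

```python
import requests
from pymongo import MongoClient

# Illustrative connection details; replace with your own.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
db = client["usageBasedInsurance"]

# Hypothetical Databricks model-serving endpoint and access token.
ENDPOINT = "https://<databricks-instance>/model/premium_model/invocations"
TOKEN = "<databricks-token>"

def update_monthly_premium(customer_id: str, month: str, miles_driven: float) -> None:
    # Ask the previously trained model for a premium based on miles driven.
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"dataframe_records": [{"milesDriven": miles_driven}]},
    )
    premium = response.json()["predictions"][0]

    # Append the new calculation as a subdocument in the monthlyPremium array.
    db["customerPolicy"].update_one(
        {"customerId": customer_id},
        {"$push": {"monthlyPremium": {"month": month, "premium": premium}}},
    )

update_monthly_premium("CUST-000123", "2023-02", 734.5)
```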

March 6, 2023