
AI Insurance Underwriter

Deploy an AI Risk Evaluation Agent that helps underwriters deliver instant evaluations with explainable recommendations for approve, decline, and refer scenarios, built with MongoDB Atlas, Atlas Vector Search, and Amazon Bedrock.

Use cases: Artificial Intelligence, Intelligent search, Modernization

Industries: Insurance

Products and tools: MongoDB Atlas, MongoDB Search, MongoDB Vector Search

Partners: Amazon Bedrock, Anthropic, Cohere, LangChain

AI agents are reshaping the insurance industry by automating complex processes, enhancing decision accuracy, and enabling continuous learning across operations. They streamline claims handling, underwriting, and customer service through autonomous collaboration and real-time data analysis. Instead of relying on static models, insurers can now deploy adaptive agent networks that personalize products, detect fraud early, and improve risk assessment. This shift creates faster and smarter insurance ecosystems that better serve customers and quickly adapt to market changes.

This solution shows an AI-driven workflow that accelerates the insurance underwriting process. The solution combines MongoDB Atlas, for data persistence and high-performance vector search, with Large Language Models (LLMs) from Cohere and Anthropic to achieve real-time, consistent, and explainable risk assessments. This approach replaces traditional manual reviews, improving operational efficiency and compliance.

This solution uses two main architectures to build an advanced AI Risk Evaluation System for insurance quotes with MongoDB Atlas as the central hub:

  • The Agentic Underwriter Architecture, which acts on the data itself.

  • The RAG Architecture, a text-based chat assistant, which reads from documents containing underwriting guidelines and quotes.


Figure 1. Underwriting Process: Agentic Architecture

This architecture generates an underwriting report using the following workflow:

  1. Embedding generation (Quote to vector):

    • Action: Incoming raw insurance quote data, including policy details, driver background, and vehicle specifications, is processed immediately.

    • Technology: Cohere's language models convert the textual and structured quote information into numerical vector representations called embeddings. This step translates the complex quote context into a format optimized for semantic search.

  2. Vector Search and rule retrieval (Sub-second matching):

    • Action: A high-speed vector search uses the generated quote embedding as the query.

    • Technology: MongoDB Atlas Vector Search executes semantic search against an index of established underwriting rules, regulatory guidelines, and risk patterns stored in MongoDB. This action quickly retrieves the most relevant, context-specific rules for the quote, ensuring compliance and accuracy.

  3. Flexible data storage (Unified data persistence):

    • Action: The system stores all relevant data, including the original structured quote information, the generated vector embedding, and the retrieved underwriting rules together.

    • Technology: MongoDB's flexible document model stores diverse data types, such as structured, unstructured, and vector data, in a single, unified document. This capacity eliminates the need for complex, slow joins across multiple database systems, streamlining the entire risk assessment pipeline.

  4. AI-powered risk assessment (Systematic evaluation):

    • Action: The system sends the complete contextual payload, which includes the original quote data and relevant rules, to a generative AI model for evaluation.

    • Technology: Anthropic's Claude model performs a systematic risk evaluation. The model analyzes factors such as driver history, vehicle safety ratings, policy limits, and the retrieved rule set to determine the overall risk profile and adherence to internal policies.

  5. Structured output and consistency (Actionable results):

    • Action: The model returns a structured evaluation result.

    • Technology: The model returns a standardized JSON object. This output includes a numerical risk score (for example, 1-100), a concise explanation for the score, and the final decision, such as "Approve", "Refer", or "Decline". MongoDB stores this structured data using atomic write operations, guaranteeing data consistency.
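The retrieval and decision steps above can be sketched in Python. The index name `vector_index`, the field names, and the decision thresholds below are illustrative assumptions, not part of the published solution:

```python
def build_rule_retrieval_pipeline(query_vector, limit=5):
    """Step 2: an Atlas Vector Search aggregation pipeline that retrieves
    the underwriting rules most similar to the embedded quote."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",    # assumed index name
                "path": "vector",           # field holding the embeddings
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": limit,
            }
        },
        {"$project": {"rule_text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

def decision_from_score(risk_score):
    """Step 5: map the model's numerical risk score (1-100) to a final
    decision. The cutoffs here are hypothetical examples."""
    if risk_score <= 40:
        return "Approve"
    if risk_score <= 70:
        return "Refer"
    return "Decline"
```

In a live deployment, the pipeline returned by `build_rule_retrieval_pipeline` would be passed to an `aggregate` call on the rules collection, and the score would come from the structured JSON that Claude returns.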

The entire process, from quote ingestion to final decision, is completed in under 10 seconds, providing the following benefits:

  • Efficiency gain: This solution replaces legacy tasks that traditionally take 30 to 60 minutes of intensive, manual effort by underwriters.

  • Explainable and consistent decisions: The AI generates structured output and rationales, giving underwriters consistent, explainable, and compliant risk assessments.

  • High-performance foundation: The solution uses MongoDB's high-performance querying and indexing capabilities, particularly Atlas Vector Search, to ensure real-time decision-making and better customer experiences.

  • Competitive advantage: This acceleration enables insurers to provide immediate quotes and policy issuance, giving them a competitive edge in the insurance market.

The solution also includes an intelligent, context-aware chatbot that provides real-time assistance to insurance agents and underwriters. This conversational interface enhances efficiency and accuracy in the underwriting process.


Figure 2. Chatbot: RAG Architecture

MongoDB serves two primary functions in the chatbot's operation:

  1. Conversation state and contextual data management: MongoDB maintains the continuity and relevance of the chat session, storing the conversation state and necessary contextual data.

  2. Dynamic contextual data retrieval: When a user poses a question, MongoDB's Aggregation Pipeline executes a single, highly efficient call. This pipeline is crucial for dynamically gathering all relevant data required for the response, including:

    • Current Quote details

    • Applicable Underwriting Rules

    • Session-specific information
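A single aggregation call of this kind might look like the following sketch; the collection names (`quotes`, `underwriting_rules`) and field names are assumptions, and the real pipeline depends on the repository's schema:

```python
def build_context_pipeline(session_id):
    """Gather quote details, applicable underwriting rules, and
    session-specific data for the chatbot in one aggregation pipeline."""
    return [
        # Session-specific information
        {"$match": {"session_id": session_id}},
        # Current quote details
        {"$lookup": {
            "from": "quotes",
            "localField": "quote_id",
            "foreignField": "_id",
            "as": "quote",
        }},
        # Applicable underwriting rules for the quote's policy type
        {"$lookup": {
            "from": "underwriting_rules",
            "localField": "quote.policy_type",
            "foreignField": "policy_type",
            "as": "rules",
        }},
        {"$project": {"quote": 1, "rules": 1, "conversation_history": 1}},
    ]
```

Running one pipeline like this, instead of separate queries per data source, is what keeps the chatbot's context assembly to a single round trip.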

MongoDB's flexible schema model allows the chatbot to store, access, and correlate a wide array of diverse data types that are typically siloed in traditional systems, including:

  • Structured fields: Standard policy and risk data.

  • Unstructured PDFs: Policy documents, reports, and submitted forms.

  • Vector embeddings: Semantic representations of documents and data for similarity search and retrieval.

  • Conversation history: The full record of the current and past user interactions.

This capability to harmonize diverse data structures ensures that the chatbot's LLM component receives all relevant information.
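As an illustration, a single unified quote document could hold all four data types in one record; the field names here are hypothetical:

```python
# Illustrative shape of one unified document in the quotes collection.
quote_doc = {
    "quote_id": "Q-1001",
    # Structured fields: standard policy and risk data
    "policy": {"type": "auto", "limit": 100000, "driver_age": 34},
    # Unstructured PDFs: stored as references to the original files
    "documents": [{"name": "application.pdf", "storage_key": "..."}],
    # Vector embedding: 1024-dimension Cohere representation for similarity search
    "vector": [0.0] * 1024,
    # Conversation history for the current session
    "conversation_history": [
        {"role": "user", "content": "Why was this quote referred?"},
    ],
}
```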

MongoDB consolidates the enriched context and securely sends it to the LLM, specifically the Anthropic Claude model, through Amazon Bedrock. This enables the chatbot to:

  • Explain risk: Provide clear, concise explanations of complex risk factors.

  • Clarify coverage: Offer precise interpretations of policy coverage and exclusions.

  • Guide underwriting decisions: Suggest optimal paths and highlight compliance requirements to facilitate faster, more informed underwriting decisions.
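A minimal sketch of the request body sent to Claude through Amazon Bedrock's InvokeModel API (Anthropic messages format) follows. The prompt wording is illustrative; in the real solution, the enriched context comes from the MongoDB aggregation described above:

```python
import json

def build_claude_request(context, question, max_tokens=1024):
    """Build the JSON body for invoking an Anthropic Claude model on
    Amazon Bedrock with the consolidated MongoDB context."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": (
                    "You are an insurance underwriting assistant.\n"
                    f"Context:\n{context}\n\nQuestion: {question}"
                ),
            }
        ],
    })
```

The resulting string would be passed as the body of an `invoke_model` call on a boto3 `bedrock-runtime` client.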

To replicate this solution, check its GitHub repository. Follow the repository's README, which covers the following steps in more detail.

1

Load the sample data into a collection called quotes in your MongoDB Atlas cluster. The sample data is in Sample_Data.md; copy it and insert it directly into the MongoDB collection.

This sample data represents quotes from auto and home insurance.

2

From the Atlas or MongoDB Compass UI, go to the Indexes tab and create a new Vector Search index with the following definition:

{
  "fields": [
    {
      "type": "vector",
      "path": "vector",
      "numDimensions": 1024,
      "similarity": "cosine"
    }
  ]
}
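The same definition can be expressed as a Python dict if you prefer to create the index programmatically (for example, with PyMongo's create_search_index helper); numDimensions matches Cohere's 1024-dimension embeddings:

```python
# Vector Search index definition for the quotes collection.
vector_index_definition = {
    "fields": [
        {
            "type": "vector",        # index the embedding field
            "path": "vector",        # field that stores the embedding
            "numDimensions": 1024,   # Cohere embedding size
            "similarity": "cosine",
        }
    ]
}
```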
3

Run the uv run main command from the backend folder. This starts the backend, where all data processing is executed.

4

Run the npm start command from the frontend folder. This starts the UI.

5

Fetch all the quotes, or find a specific one using the search functionality. Then generate reports with the Generate Report button in the UI, which executes the Underwriting Agent functionality.

  • Design insurance-native assistants: Use domain-specific prompt engineering to build structured prompts that embed insurance context, rule references, and clear output formats, making LLMs behave like insurance-native assistants. This improves answer quality, reduces hallucinations, and makes AI outputs easier to plug into downstream workflows.

  • Customize retrieval systems: Optimize vector search with metadata-based routing and context injection to tailor retrieval to insurance hierarchies and rule relationships. This delivers more relevant results, improving the performance of RAG and search-driven experiences.

  • Enhance document representation: Generate content-aware embeddings and optimize document chunking strategy to ensure each document is represented in a way that fits its structure and purpose. This enhances the accuracy and efficiency of RAG pipelines over mixed insurance content.

  • Jeff Needham, MongoDB

  • Albert Cortez, MongoDB

  • Agentic AI Processing

  • Automating Digital Underwriting

  • Claim Management for RAG
