Atlas Architecture Center

Claims Management Using LLMs and Vector Search for RAG

Discover how to combine MongoDB Atlas Vector Search and Large Language Models (LLMs) to streamline the claims adjustment process.

Use cases: Gen AI, Content Management

Industries: Insurance, Finance, Manufacturing and Mobility, Retail

Products: MongoDB Atlas, MongoDB Atlas Vector Search

Partners: LangChain, FastAPI

One of the biggest challenges for claims adjusters is aggregating information from diverse systems and data formats. Over the years, insurance companies have accumulated terabytes of unstructured data in their datastores, which can help uncover business insights, deliver better customer experiences, and streamline operations. However, many companies fail to capitalize on this.

To help your organization overcome these challenges, you can build a claims management solution with MongoDB that combines Atlas Vector Search and LLMs in a retrieval-augmented generation (RAG) system. This framework helps organizations go beyond the limitations of basic foundation models and use their proprietary data to make models context-aware, streamlining operations with AI’s full potential.

MongoDB provides a unified development experience by storing documents alongside their vector embeddings and associated metadata, eliminating the need to retrieve data from a separate system. This allows users to focus on building their application instead of maintaining a separate vector store. Ultimately, the data retrieved by Atlas Vector Search is fed to the LLM as context.

The RAG querying flow works as follows:

  1. The user writes a prompt in natural language.

  2. Voyage AI's embedding model vectorizes the prompt.

  3. Atlas Vector Search uses the vectorized prompt to retrieve relevant documents.

  4. The LLM uses both the retrieved context and the original question to generate an answer.

  5. The user receives an answer.


Figure 1. RAG querying flow
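The five steps above can be sketched as a single function. This is an illustrative outline, not the demo's actual code: `embed_fn`, `retrieve`, and `llm_fn` are hypothetical placeholders for the Voyage AI embedding client, an Atlas Vector Search query, and an LLM client.

```python
# Illustrative outline of the five-step RAG querying flow. embed_fn, retrieve,
# and llm_fn are placeholders for the embedding model, the Atlas Vector Search
# lookup, and the LLM client.

def answer_question(question, embed_fn, retrieve, llm_fn):
    query_vector = embed_fn(question)        # step 2: vectorize the prompt
    context_docs = retrieve(query_vector)    # step 3: retrieve relevant documents
    context = "\n".join(d["claimDescription"] for d in context_docs)
    prompt = (                               # step 4: combine context and question
        "Use only this context to answer.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_fn(prompt)                    # step 5: answer returned to the user
```

In the demo, the `retrieve` step would run a `$vectorSearch` aggregation against the `claims_final` collection.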

In the demo solution, the data model is a simplified design that emulates real-world insurance claim data. The approach leverages MongoDB's flexible document model to handle diverse data structures, storing embeddings alongside the documents they describe.

The claims_final collection stores claim information. The relevant fields are the claimDescription field and its corresponding embedding, claimDescriptionEmbedding. This embedding is indexed and used to retrieve documents related to the user prompt. A document in this collection looks like the following:

{
  "_id": {
    "$oid": "65cc809c76da22d0089dfb2e"
  },
  "customerID": "c105",
  "policyNumber": "p105",
  "claimID": "cl105",
  "claimStatusCode": "Subrogation",
  "claimDescription": "High winds caused ...",
  "totalLossAmount": 4200,
  "claimFNOLDate": "2023-10-27",
  "claimClosedDate": "2024-09-01",
  "claimLineCode": "Auto",
  "damageDescription": "Roof caved in ...",
  "insurableObject": {
    "insurableObjectId": "abc105",
    "vehicleMake": "Make105",
    "vehicleModel": "Model105"
  },
  "coverages": [
    {
      "coverageCode": "888",
      "description": "3rd party responsible"
    },
    {
      "coverageCode": "777",
      "description": "Vehicle rental/loaner service for customer"
    }
  ],
  "claimDescriptionEmbedding": [-0.017, ..., 0.011],
  "damageDescriptionEmbedding": [-0.047, ..., -0.043],
  "photo": "105.jpg",
  "photoEmbedding": [9.629, ..., 14.075]
}
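Because the vector lives in the same document as its source text, a claim only needs its embedding attached before insertion. The helper below is a hypothetical sketch, not part of the demo: `add_claim_embeddings` and `embed_fn` are invented names, and the dummy 350-entry vector merely stands in for a real embedding model's output.

```python
# Hypothetical helper: attach an embedding of claimDescription to a claim
# document before inserting it, so the vector and its source text live together.

def add_claim_embeddings(claim, embed_fn):
    doc = dict(claim)  # shallow copy so the input document is untouched
    doc["claimDescriptionEmbedding"] = embed_fn(claim["claimDescription"])
    return doc

# Usage; with a real driver you would then call collection.insert_one(doc).
claim = {"claimID": "cl105", "claimDescription": "High winds caused ..."}
doc = add_claim_embeddings(claim, embed_fn=lambda text: [0.0] * 350)  # dummy vector
```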

For detailed setup instructions, follow the README of this GitHub repository. The instructions guide you through the following steps:

  1. Create a new database in MongoDB Atlas called demo_rag_insurance and use the provided dataset demo_rag_insurance_claims.json to create a collection called claims_final.

  2. Create and configure an Atlas Vector Search index for claimDescriptionEmbeddingCohere called vector_index_claim_description_cohere. You must structure the search index as follows:

{
  "fields": [
    {
      "type": "vector",
      "path": "claimDescriptionEmbeddingCohere",
      "numDimensions": 350,
      "similarity": "cosine"
    }
  ]
}

  3. Set up a virtual environment using Poetry.

  4. Start the backend server.

  5. Configure environment variables and run the frontend.

You must run both the frontend and the backend. The web UI lets you ask the LLM questions, view its answers, and inspect the reference documents used as context.
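One common setup pitfall is a query vector whose length does not match the index's numDimensions (350 in the index definition from step 2): such a query returns no usable results. The guard below is a hypothetical helper for illustration, not part of the demo.

```python
# Hypothetical guard (not part of the demo): a $vectorSearch query vector must
# have exactly as many entries as the index's numDimensions (350 in step 2).

INDEX_DEFINITION = {
    "fields": [
        {
            "type": "vector",
            "path": "claimDescriptionEmbeddingCohere",
            "numDimensions": 350,
            "similarity": "cosine",
        }
    ]
}

def check_query_vector(vector, definition=INDEX_DEFINITION):
    expected = definition["fields"][0]["numDimensions"]
    if len(vector) != expected:
        raise ValueError(
            f"query vector has {len(vector)} dimensions, index expects {expected}"
        )
    return True
```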

To try MongoDB's semantic search tool now, visit the Atlas Vector Search Quick Start guide.

  • Generate Text Embeddings: You can create embeddings using different models and deployment options. Consider your privacy and data protection requirements: you can deploy a model locally if your data must remain on your own servers, or call an API to get your vector embeddings back, as explained in this tutorial. You can use Voyage AI or open-source models.

  • Create Vector Search Indexes: You can build Vector Search indexes in MongoDB Atlas. Alternatively, you can build indexes for local deployments.

  • Perform a Vector Search Query: You can run Vector Search queries with MongoDB's aggregation pipeline, allowing you to chain multiple operations in your workflow. This approach eliminates the need to learn another programming language or switch contexts.

  • Develop a Fast RAG Implementation: You can develop a fast RAG implementation using the LangChain framework, combining MongoDB Atlas Vector Search and LLMs.
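As a concrete illustration of the query bullet above, the demo's index could be queried with a `$vectorSearch` aggregation stage like the one sketched here. The index and field names follow the setup steps, but the candidate and result limits are arbitrary choices, and `query_vector` would come from the embedding model.

```python
# Sketch of a $vectorSearch aggregation pipeline for the demo's index; the
# numCandidates and limit values are arbitrary illustrative choices.

def build_claim_search_pipeline(query_vector, limit=5):
    return [
        {
            "$vectorSearch": {
                "index": "vector_index_claim_description_cohere",
                "path": "claimDescriptionEmbeddingCohere",
                "queryVector": query_vector,
                "numCandidates": limit * 20,  # candidate pool for the ANN search
                "limit": limit,
            }
        },
        # Project only the fields the LLM needs as context.
        {"$project": {"_id": 0, "claimID": 1, "claimDescription": 1}},
    ]

# With PyMongo: results = list(collection.aggregate(build_claim_search_pipeline(vec)))
```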

  • Luca Napoli, Industry Solutions, MongoDB

  • Jeff Needham, Industry Solutions, MongoDB
