- Use cases: Analytics, Gen AI, Modernization, Personalization
- Industries: Insurance, Financial Services, Telecommunications, Healthcare, Retail
- Products and tools: MongoDB Atlas, Vector Search, Atlas Search
- Partners: Dataworkz, AWS, Cohere, Langchain
Customer satisfaction is critical for insurance companies. Studies have shown that companies with superior customer experiences consistently outperform their peers. In fact, McKinsey found that life and property/casualty insurers with superior customer experiences saw a significant 20% and 65% increase in total shareholder return, respectively, over five years.
A satisfied customer is a loyal customer. They are 80% more likely to renew their policies, directly contributing to sustainable growth. However, one major challenge faced by many insurance companies is the inefficiency of their call centers. Agents often struggle to quickly locate and deliver accurate information to customers, leading to frustration and dissatisfaction.
This solution illustrates how MongoDB can transform call center operations. By converting call recordings into searchable vectors (numerical representations of data points in a multi-dimensional space), businesses can quickly access relevant information and improve customer service. We'll dig into how the integration of Amazon Transcribe, Cohere, and MongoDB Atlas Vector Search is achieving this transformation.
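The first step of that integration is turning audio into text. As a rough sketch, assuming recordings land in an S3 bucket, a call recording can be submitted to Amazon Transcribe via the AWS SDK for Python (boto3); the job, bucket, and file names below are illustrative placeholders, not values from this solution:

```python
# Sketch: submit a call recording to Amazon Transcribe.
# All names (job, bucket, file) are illustrative assumptions.

def build_transcription_request(job_name: str, media_uri: str) -> dict:
    """Assemble the parameters for a StartTranscriptionJob call."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "MediaFormat": "wav",
        "LanguageCode": "en-US",
    }

def start_transcription(job_name: str, media_uri: str):
    """Kick off the job; requires AWS credentials to be configured."""
    import boto3  # imported here so the sketch is inspectable without AWS set up
    client = boto3.client("transcribe")
    return client.start_transcription_job(**build_transcription_request(job_name, media_uri))
```

Transcribe runs asynchronously, so a real pipeline would poll the job status (or react to an EventBridge notification) before fetching the transcript.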
Customer service interactions are goldmines of valuable insights. By analyzing call recordings, we can identify successful resolution strategies and uncover frequently asked questions. In turn, surfacing this information, which is often buried in audio files, to agents enables them to give customers faster, more accurate assistance.
However, the vast volume and unstructured nature of these audio files make it challenging to extract actionable information efficiently.
To address this challenge, we propose a pipeline that leverages AI and analytics to transform raw audio recordings into vectors, as shown in Figure 1.
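The embedding-and-storage stage of the pipeline can be sketched as follows; this is a minimal illustration, assuming the Cohere Python SDK and a MongoDB Atlas cluster, with database, collection, and model names chosen for the example rather than taken from this solution:

```python
# Sketch of the pre-processing pipeline: chunk a transcript, embed each
# chunk with Cohere, and store text + vector together in MongoDB Atlas.
# Database/collection/model names are illustrative assumptions.

def chunk_transcript(text: str, max_words: int = 100) -> list[str]:
    """Split a transcript into word-bounded chunks suitable for embedding."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed_and_store(transcript: str, call_id: str) -> None:
    """Embed transcript chunks and persist them alongside their metadata."""
    import cohere
    from pymongo import MongoClient

    co = cohere.Client()  # reads COHERE_API_KEY from the environment
    coll = MongoClient()["insurance"]["call_chunks"]

    chunks = chunk_transcript(transcript)
    vectors = co.embed(texts=chunks, model="embed-english-v3.0",
                       input_type="search_document").embeddings
    coll.insert_many(
        {"call_id": call_id, "text": c, "embedding": v}
        for c, v in zip(chunks, vectors)
    )
```

Storing the vector in the same document as the chunk text and call metadata keeps retrieval a single query, which is what makes the operational data store directly usable by real-time applications.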
Once the data is stored in vector format within the operational data store, it becomes accessible for real-time applications. This data can be consumed directly through vector search or integrated into a retrieval-augmented generation (RAG) architecture, a technique that combines the capabilities of large language models (LLMs) with external knowledge sources to generate more accurate and informative outputs.
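To make this concrete, here is a minimal sketch of both consumption paths: an Atlas `$vectorSearch` aggregation stage for direct retrieval, and a simple RAG loop that feeds the retrieved chunks to an LLM. The index name, field paths, and model choices are assumptions for illustration, not prescribed by this solution:

```python
# Sketch: consume the stored vectors via Atlas Vector Search, directly
# or inside a retrieval-augmented generation (RAG) flow.
# Index, field, and model names are illustrative assumptions.

def build_vector_search_pipeline(query_vector: list[float], limit: int = 5) -> list[dict]:
    """Aggregation pipeline: a $vectorSearch stage plus a projection."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",   # assumed Atlas Vector Search index name
                "path": "embedding",       # field holding the stored vector
                "queryVector": query_vector,
                "numCandidates": 100,      # ANN candidates to score before limiting
                "limit": limit,
            }
        },
        {"$project": {"_id": 0, "text": 1,
                      "score": {"$meta": "vectorSearchScore"}}},
    ]

def answer_with_rag(question: str) -> str:
    """Retrieve relevant chunks, then ask an LLM with that context."""
    import cohere
    from pymongo import MongoClient

    co = cohere.Client()
    coll = MongoClient()["insurance"]["call_chunks"]

    qvec = co.embed(texts=[question], model="embed-english-v3.0",
                    input_type="search_query").embeddings[0]
    docs = list(coll.aggregate(build_vector_search_pipeline(qvec)))
    context = "\n".join(d["text"] for d in docs)
    reply = co.chat(message=f"Context:\n{context}\n\nQuestion: {question}")
    return reply.text
```

Grounding the LLM's answer in the retrieved call chunks is what distinguishes this RAG flow from querying the model alone: the response is constrained by knowledge the model was never trained on.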
Now that we’ve looked at the components of the pre-processing pipeline, let’s explore the proposed real-time system architecture in detail. It comprises the following modules and functions (see Figure 2):
The proposed architecture is simple yet powerful, and easy to implement. Moreover, it can serve as a foundation for more advanced use cases that require complex interactions, such as agentic workflows and iterative, multi-step processes that combine LLMs with hybrid search to complete sophisticated tasks.
This solution not only impacts human operator workflows but can also underpin chatbots and voicebots, enabling them to provide more relevant and contextual customer responses.
Create this demo for yourself by following the instructions and associated models in this solution’s repository.
Learn how MongoDB and AI are transforming insurance call centers.
Discover how leading industries are transforming with AI and MongoDB Atlas.