
Integrate Vector Search with AI Technologies


You can use Atlas Vector Search with popular AI providers and LLMs through their standard APIs. MongoDB and partners also provide specific product integrations to help you leverage Atlas Vector Search in your generative AI and AI-powered applications.

This page highlights notable AI integrations that MongoDB and partners have developed. For a complete list of integrations and partner services, see Explore MongoDB Partner Ecosystem.

Key Concepts

Large Language Models (LLMs)

You can integrate Atlas Vector Search with LLMs and LLM frameworks to build AI-powered applications. When developing with LLMs, you might encounter the following limitations:

  • Stale data: LLMs are trained on a static dataset that is current only up to a fixed cutoff date.

  • No access to local data: LLMs can't draw on your private or organization-specific data.

  • Hallucinations: LLMs sometimes generate plausible-sounding but inaccurate information.

Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is an architecture for LLM applications that's designed to address these limitations. In RAG, you perform the following actions:

  1. Store your custom data in a vector database.

  2. Use vector search to retrieve the documents most semantically similar to the user's query from the vector database. These documents supplement the static training data that the LLM was built on.

  3. Prompt the LLM with the retrieved documents as context, so that it generates a more informed and accurate response.
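The retrieval and prompting steps above can be sketched in a few lines. This is a minimal illustration, assuming a collection whose documents store their embedding in an `embedding` field and a vector index named `vector_index` (both names are hypothetical):

```python
def build_vector_search_pipeline(query_vector, index_name="vector_index",
                                 path="embedding", limit=5):
    """Build an Atlas $vectorSearch aggregation pipeline (step 2 of RAG)."""
    return [
        {
            "$vectorSearch": {
                "index": index_name,          # name of the Atlas Vector Search index
                "path": path,                 # field that stores the embeddings
                "queryVector": query_vector,  # embedding of the user's question
                "numCandidates": limit * 20,  # candidates considered by the ANN search
                "limit": limit,               # documents returned as context
            }
        },
        {"$project": {"_id": 0, "text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]


def build_rag_prompt(question, context_docs):
    """Assemble the LLM prompt (step 3 of RAG) from retrieved documents."""
    context = "\n\n".join(doc["text"] for doc in context_docs)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

With a driver such as PyMongo, you would pass the pipeline to `collection.aggregate(...)` and send the assembled prompt to your LLM of choice; those calls are omitted here so the sketch stays self-contained.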

To learn more, see What is retrieval-augmented generation (RAG)?

Frameworks

You can integrate Atlas Vector Search with the following open-source frameworks to store custom data in Atlas and implement RAG with Atlas Vector Search.
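Whichever framework you choose, the stored embeddings are queried through an Atlas Vector Search index on the embedding field. The sketch below shows the shape of such an index definition; the field name (`embedding`), the dimensionality (1536), and the optional `genre` filter field are illustrative assumptions that depend on your embedding model and schema:

```python
import json

# Sketch of an Atlas Vector Search index definition, expressed as the
# JSON document you supply when creating the index.
index_definition = {
    "fields": [
        {
            "type": "vector",
            "path": "embedding",    # field that holds the embedding array
            "numDimensions": 1536,  # must match your embedding model's output size
            "similarity": "cosine", # or "euclidean" / "dotProduct"
        },
        {
            "type": "filter",       # enables pre-filtering on this field
            "path": "genre",
        },
    ]
}

print(json.dumps(index_definition, indent=2))
```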

LangChain

LangChain is a framework that simplifies the creation of LLM applications through the use of "chains": LangChain-specific components that you can combine for a variety of use cases, including RAG.

To get started, see Get Started with the LangChain Integration.

LlamaIndex

LlamaIndex is a framework that simplifies how you connect custom data sources to LLMs. It provides several tools to help you load and prepare vector embeddings for RAG applications.

To get started, see Get Started with the LlamaIndex Integration.

Semantic Kernel

Microsoft Semantic Kernel is an SDK that allows you to combine various AI services with your applications. You can use Semantic Kernel for a variety of use cases, including RAG.

To get started, see Get Started with the Semantic Kernel Integration.

Services

You can also integrate Atlas Vector Search with the following AI services.

Amazon Bedrock Knowledge Base

Amazon Bedrock is a fully managed service for building generative AI applications. You can integrate Atlas Vector Search as a knowledge base for Amazon Bedrock to store custom data in Atlas and implement RAG.

To get started, see Get Started with the Amazon Bedrock Knowledge Base Integration.

API Resources

Refer to the following API resources as you develop with AI integrations for Atlas Vector Search:
