
Integrate MongoDB with LangGraph

You can integrate MongoDB with LangGraph to build AI agents and advanced RAG applications. This page provides an overview of the MongoDB LangGraph integration and how you can use MongoDB for both retrieval and agent state persistence in your LangGraph workflows.

Get Started

Note

For the JavaScript integration, see LangGraph JS/TS.

LangGraph is a specialized framework within the LangChain ecosystem designed for building AI agents and complex multi-agent workflows. Graphs are the core components of LangGraph, representing the workflow of your agent. The MongoDB LangGraph integration enables the following capabilities:

  • Retrieval Tools: You can use the MongoDB LangChain integration to quickly create retrieval tools for your LangGraph workflows.

  • MongoDB Checkpointer: You can persist the state of your LangGraph agents in MongoDB, providing conversation memory and fault tolerance.

Integrating your LangGraph applications with MongoDB allows you to consolidate both retrieval capabilities and agent state persistence in a single database, simplifying your architecture and reducing operational complexity.

Retrieval Tools

You can seamlessly use LangChain retrievers as tools in your LangGraph workflow to retrieve relevant data from Atlas.

The MongoDB LangChain integration natively supports full-text search, vector search, hybrid search, and parent-document retrieval. For a complete list of retrieval methods, see MongoDB LangChain Retrievers.
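
For example, the following sketch builds a hybrid search retriever that combines vector search and full-text search results with reciprocal rank fusion. It is a minimal sketch under a few assumptions: vector_store is a MongoDBAtlasVectorSearch instance (created as shown in the steps below), the collection also has an Atlas Search (full-text) index named search_index, and field names may vary slightly across langchain-mongodb versions.

from langchain_mongodb.retrievers import MongoDBAtlasHybridSearchRetriever
# Combine vector and full-text search results over the same collection
hybrid_retriever = MongoDBAtlasHybridSearchRetriever(
    vectorstore=vector_store,  # assumption: the vector store created in the steps below
    search_index_name="search_index",  # assumption: name of your Atlas Search (full-text) index
)
documents = hybrid_retriever.invoke("<your query>")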

  1. To create a basic retrieval tool with Atlas Vector Search and LangChain:

    from langchain.tools.retriever import create_retriever_tool
    from langchain_mongodb.vectorstores import MongoDBAtlasVectorSearch
    from langchain_voyageai import VoyageAIEmbeddings
    # Instantiate the vector store
    vector_store = MongoDBAtlasVectorSearch.from_connection_string(
        connection_string="<connection-string>",  # Atlas cluster or local deployment URI
        namespace="<database-name>.<collection-name>",  # Database and collection name
        embedding=VoyageAIEmbeddings(model="voyage-3-large"),  # Embedding model to use
        index_name="vector_index",  # Name of the vector search index
        # Other optional parameters...
    )
    # Create a retrieval tool
    retriever = vector_store.as_retriever()
    retriever_tool = create_retriever_tool(
        retriever,
        "vector_search_retriever",  # Tool name
        "Retrieve relevant documents from the collection",  # Tool description
    )
  2. To add the tool as a node in LangGraph (a sketch of connecting the node to an LLM follows these steps):

    1. Convert the tool into a node.

    2. Add the node to the graph.

    from langgraph.graph import StateGraph, MessagesState
    from langgraph.prebuilt import ToolNode
    # Define the graph with a message-based state schema
    workflow = StateGraph(MessagesState)
    # Convert the retriever tool into a node
    retriever_node = ToolNode([retriever_tool])
    # Add the tool as a node in the graph
    workflow.add_node("vector_search_retriever", retriever_node)
    graph = workflow.compile()
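
The graph above only registers the retriever as a tool node; it isn't connected to anything else yet. A common pattern is to pair it with an LLM node that decides when to call the retriever. The following is a minimal sketch of that wiring under a few assumptions: it extends the workflow and retriever_tool defined in the steps above, and it uses ChatOpenAI with gpt-4o (via the langchain-openai package) as a stand-in for any chat model that supports tool calling.

from langchain_openai import ChatOpenAI  # assumption: any tool-calling chat model works here
from langgraph.graph import START, END
from langgraph.prebuilt import tools_condition
# LLM that can decide whether to call the retriever tool
llm = ChatOpenAI(model="gpt-4o").bind_tools([retriever_tool])
# Agent node: call the LLM on the conversation so far
def agent(state):
    return {"messages": [llm.invoke(state["messages"])]}
workflow.add_node("agent", agent)
workflow.add_edge(START, "agent")
# Route to the retriever node when the LLM requests a tool call, otherwise end
workflow.add_conditional_edges(
    "agent",
    tools_condition,
    {"tools": "vector_search_retriever", END: END},
)
# Return the retrieved documents to the agent so it can produce the final answer
workflow.add_edge("vector_search_retriever", "agent")
graph = workflow.compile()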

MongoDB Checkpointer

The MongoDB Checkpointer allows you to persist your agent's state in MongoDB. This feature enables human-in-the-loop workflows, memory, time travel, and fault tolerance for your LangGraph agents.

from langgraph.checkpoint.mongodb import MongoDBSaver
from pymongo import MongoClient
# Connect to your Atlas cluster or local Atlas deployment
client = MongoClient("<connection-string>")
# Initialize the MongoDB checkpointer
checkpointer = MongoDBSaver(client)
# Compile the graph with the checkpointer
app = workflow.compile(checkpointer=checkpointer)
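
Once the graph is compiled with the checkpointer, you can tie each invocation to a thread so that state persists across calls. The following is a minimal usage sketch; the thread identifier "user-thread-1" and the question text are hypothetical, and the input assumes the MessagesState schema used above.

# Each thread_id identifies a conversation whose checkpoints are stored in MongoDB
config = {"configurable": {"thread_id": "user-thread-1"}}
# State from this call is checkpointed and reused on later calls with the same thread_id
result = app.invoke(
    {"messages": [("user", "What documents mention vector search?")]},
    config=config,
)
# Inspect the latest checkpoint for this thread
latest_state = app.get_state(config)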
