You can integrate MongoDB with LangGraph.js to add long-term memory to your agents in addition to the short-term memory you get from the MongoDB LangGraph Checkpointer.
This page builds on the concepts from the tutorials listed in the Prerequisites section.
Use this page to learn how to:
Understand the difference between short-term and long-term memory in LangGraph.
Configure a MongoDB-backed long-term memory store for LangGraph.js.
Persist and retrieve user-specific data across threads and sessions.
Combine long-term memory with MongoDB Vector Search and Voyage AI AutoEmbeddings for semantic recall.
Note
The examples on this page use TypeScript-style syntax. You can adapt them for plain JavaScript by removing type annotations.
Overview of MongoDB Memory Mechanisms in LangGraph
LangGraph provides two complementary memory mechanisms:
Short-Term Memory (Checkpoints)
Uses a checkpointer to persist state for a single thread (conversation) across requests.
In the MongoDB integration, this is handled by the MongoDB LangGraph
Checkpointer (MongoDBSaver). This enables features like time travel,
human-in-the-loop review, and fault tolerance for a given conversation.
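For example, a checkpointer is passed to the compiled graph and scoped by a thread_id at invocation time. The following minimal sketch assumes a workflow variable holding a StateGraph like the one built later on this page:

```typescript
import { MongoDBSaver } from "@langchain/langgraph-checkpoint-mongodb";
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI as string);

// Short-term memory: graph state is checkpointed per thread_id.
const checkpointer = new MongoDBSaver({ client, dbName: "hr_database" });

// `workflow` is assumed to be a StateGraph defined elsewhere:
// const app = workflow.compile({ checkpointer });
// await app.invoke(input, { configurable: { thread_id: "conversation-1" } });
```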
Long-Term Memory (Stores)
Uses a Store abstraction to persist data across threads, not just within a single conversation. Ideal for storing data that should persist between sessions and be shared across agents.
With the MongoDB Store for LangGraph.js, you can:
Implement the BaseStore interface in JavaScript/TypeScript with MongoDB as the backend.
Use the standard Store API (get, put, delete, search) from LangGraph.
Store JSON documents under hierarchical namespaces (for example, [userId, "memories"]).
Perform semantic search and metadata filtering over your stored data, backed by MongoDB Vector Search and Voyage AI AutoEmbeddings.
Prerequisites
Before you begin, ensure that you have the following:
One of the following MongoDB cluster types:
An Atlas cluster running MongoDB version 6.0.11, 7.0.2, or later. Ensure that your IP address is included in your Atlas project's access list.
A local Atlas deployment created using the Atlas CLI. To learn more, see Create a Local Atlas Deployment.
A MongoDB Community or Enterprise cluster with Search and Vector Search installed.
npm and Node.js installed.
A Voyage AI API key. To create an API key, see Model API Keys.
An OpenAI API Key. You must have an OpenAI account with credits available for API requests. To learn more about registering an OpenAI account, see the OpenAI API website.
A MongoDB database and collection configured for MongoDB Vector Search if you plan to use semantic search. To learn more, see Create an Atlas Vector Search Index.
You should be familiar with and ideally have completed:
Integrate MongoDB with LangGraph.js (overview, checkpointer, and retrieval tools).
Build an AI Agent with LangGraph.js and MongoDB Atlas (end-to-end agent, vector search, and short-term memory).
Install Dependencies
Install the core dependencies used in the Build an AI Agent tutorial, plus the MongoDB Store module for LangGraph.js.
```shell
npm init -y

# Core LangChain / LangGraph / MongoDB dependencies
npm i --legacy-peer-deps \
  langchain \
  @langchain/langgraph \
  @langchain/mongodb \
  @langchain/community \
  @langchain/langgraph-checkpoint-mongodb \
  dotenv \
  express \
  mongodb \
  zod
```
The @langchain/langgraph-checkpoint-mongodb package includes both
the checkpointer and the MongoDB Store for long-term memory.
Configure the MongoDB Store
LangGraph Stores implement the BaseStore interface with operations
such as get, put, delete, and search for JSON
documents under hierarchical namespaces.
Initialize the MongoDB Client
Reuse the MongoDB client configuration from your existing LangGraph.js agent server.
```typescript
// mongodb-client.ts
import { MongoClient } from "mongodb";
import "dotenv/config";

export const client = new MongoClient(process.env.MONGODB_URI as string);

export async function connectClient() {
  await client.connect();
  await client.db("admin").command({ ping: 1 });
  console.log("Pinged your deployment. You successfully connected to MongoDB!");
}
```
Create a MongoDBStore Instance
Create a module that exports a configured MongoDBStore for
long-term memory.
```typescript
// long-term-store.ts
import type { MongoClient } from "mongodb";
import { MongoDBStore } from "@langchain/langgraph-checkpoint-mongodb";

const DB_NAME = "hr_database";
const COLLECTION_NAME = "long_term_memory";

export function createMongoDBStore(client: MongoClient) {
  const store = new MongoDBStore({
    client,
    dbName: DB_NAME,
    collectionName: COLLECTION_NAME,
  });
  return store;
}
```
This store persists data across threads and sessions, keyed by
hierarchical namespaces such as [userId, "memories"] or
[userId, "preferences"].
Design Your Long-Term Memory Schema
Long-term memory works best when you model stable facts about your users or domain.
Common patterns include:
User profiles - Role, seniority, team, and location, plus persistent preferences (for example, tone, language, product tier).
Constraints and policies - Allergies, compliance restrictions, and budget caps.
Interaction history summaries - High-level summaries of past sessions and decisions that should carry forward (for example, "user prefers MongoDB Atlas over self-managed").
You can store these either as simple JSON documents or as documents with embeddings for semantic search.
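For example, a profile memory modeled for semantic recall might look like the following document. The field names here (data, text, updatedAt) are illustrative, not a required schema:

```typescript
// A hypothetical long-term memory document for one user.
const profileMemory = {
  type: "profile",
  data: {
    role: "Recruiting manager",
    team: "EMEA",
    preferences: { tone: "concise", language: "en" },
  },
  // Text summary that you would embed for semantic search.
  text: "Recruiting manager on the EMEA team; prefers concise answers in English.",
  updatedAt: new Date().toISOString(),
};
```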
Persist User Memories
The MongoDB Store API supports the following operations:
put - Store or update a value.
get - Retrieve a value by namespace and key.
delete - Remove a value.
search - Retrieve values by vector similarity and/or metadata filters.
The exact TypeScript types might differ, but the following code demonstrates a typical usage pattern in LangGraph.js:
```typescript
// memory-api.ts
import type { MongoDBStore } from "@langchain/langgraph-checkpoint-mongodb";

type MemoryValue = {
  type: "profile" | "preference" | "fact";
  data: Record<string, unknown>;
  updatedAt: string;
};

export async function putUserMemory(
  store: MongoDBStore,
  userId: string,
  key: string,
  value: MemoryValue
) {
  await store.put([userId, "memories"], key, value);
}

// Retrieves a single memory item by namespace and key.
export async function getUserMemories(
  store: MongoDBStore,
  userId: string,
  key: string
) {
  const result = await store.get([userId, "memories"], key);
  return result;
}
```
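For example, you might record a user's preference and read it back later. This sketch reuses the modules defined earlier on this page:

```typescript
import { client, connectClient } from "./mongodb-client";
import { createMongoDBStore } from "./long-term-store";
import { putUserMemory, getUserMemories } from "./memory-api";

await connectClient();
const store = createMongoDBStore(client);

await putUserMemory(store, "user-123", "tone_preference", {
  type: "preference",
  data: { tone: "concise" },
  updatedAt: new Date().toISOString(),
});

// Retrieve the stored item; its `value` field holds the MemoryValue.
const memory = await getUserMemories(store, "user-123", "tone_preference");
console.log(memory?.value);
```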
Add Long-Term Memory to Your Agent Workflow
This section assumes you already have an agent workflow similar to the one described in Build an AI Agent with LangGraph.js and MongoDB Atlas, including:
A GraphState annotation with a messages field.
A tool node that calls MongoDB Vector Search using embeddings from Voyage AI.
A chat model node that decides whether to call tools or respond directly.
A MongoDBSaver checkpointer for short-term memory.
Inject the Store into Your Agent Function
Update your callAgent function to accept a store alongside the
MongoDB client:
```typescript
// agent-with-long-term-memory.ts
import { ChatOpenAI } from "@langchain/openai";
import {
  AIMessage,
  BaseMessage,
  HumanMessage,
} from "@langchain/core/messages";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { StateGraph, Annotation } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { MongoDBSaver } from "@langchain/langgraph-checkpoint-mongodb";
import type { MongoClient } from "mongodb";
import type { MongoDBStore } from "@langchain/langgraph-checkpoint-mongodb";
import { putUserMemory, getUserMemories } from "./memory-api";

export async function callAgent(
  client: MongoClient,
  store: MongoDBStore,
  query: string,
  threadId: string,
  userId: string,
) {
  const dbName = "hr_database";
  const db = client.db(dbName);
  const collection = db.collection("employees");

  // 1. Retrieve long-term memories for the user
  const existingMemories = await getUserMemories(store, userId, "last_response");

  // 2. Define graph state
  const GraphState = Annotation.Root({
    messages: Annotation<BaseMessage[]>({
      reducer: (x, y) => x.concat(y),
    }),
    userId: Annotation<string>(),
    memories: Annotation<unknown | null>(),
  });

  // 3. Define tools (for example, MongoDB Vector Search retriever)
  const employeeLookupTool = tool(/* ...reuse from Build an AI Agent tutorial... */);
  const tools = [employeeLookupTool];
  const toolNode = new ToolNode<typeof GraphState.State>(tools);

  // 4. Configure the chat model
  const model = new ChatOpenAI({
    model: "gpt-4o-mini",
  }).bindTools(tools);

  // 5. Define the model node
  async function callModel(state: typeof GraphState.State) {
    const prompt = ChatPromptTemplate.fromMessages([
      [
        "system",
        `You are a helpful HR chatbot agent.
Use the provided tools and long-term memories to answer questions.
Long-term memories (if any): {memories}`,
      ],
      new MessagesPlaceholder("messages"),
    ]);
    const formatted = await prompt.formatMessages({
      messages: state.messages,
      memories: JSON.stringify(state.memories ?? "none"),
    });
    const result = await model.invoke(formatted);
    return { messages: [result] };
  }

  // 6. Define routing logic
  function shouldContinue(state: typeof GraphState.State) {
    const messages = state.messages;
    const lastMessage = messages[messages.length - 1] as AIMessage;
    if (lastMessage.tool_calls?.length) {
      return "tools";
    }
    return "__end__";
  }

  // 7. Build the workflow
  const workflow = new StateGraph(GraphState)
    .addNode("agent", callModel)
    .addNode("tools", toolNode)
    .addEdge("__start__", "agent")
    .addConditionalEdges("agent", shouldContinue)
    .addEdge("tools", "agent");

  // 8. Configure short-term memory (checkpointer)
  const checkpointer = new MongoDBSaver({ client, dbName });
  const app = workflow.compile({
    checkpointer,
    store,
  });

  // 9. Invoke the graph with both short-term and long-term memory
  const finalState = await app.invoke(
    {
      messages: [new HumanMessage(query)],
      userId,
      memories: existingMemories,
    },
    {
      recursionLimit: 15,
      configurable: { thread_id: threadId },
    },
  );
  const last = finalState.messages[finalState.messages.length - 1];
  const content = last.content;

  // 10. Optionally, update long-term memory with new facts
  await putUserMemory(store, userId, "last_response", {
    type: "fact",
    data: { content },
    updatedAt: new Date().toISOString(),
  });

  return content;
}
```
This pattern lets you:
Read long-term memory at the start of each interaction.
Inject those memories into the state (for example, as memories or into the system prompt).
Update long-term memory after each interaction based on what the agent learned.
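For example, a hypothetical Express endpoint could pass both identifiers into callAgent: the thread_id scopes short-term memory to the conversation, while the userId scopes long-term memory to the user:

```typescript
// endpoint sketch; module names reuse the examples defined earlier on this page
import express from "express";
import { client, connectClient } from "./mongodb-client";
import { createMongoDBStore } from "./long-term-store";
import { callAgent } from "./agent-with-long-term-memory";

const app = express();
app.use(express.json());

await connectClient();
const store = createMongoDBStore(client);

app.post("/chat", async (req, res) => {
  const { message, threadId, userId } = req.body;
  const response = await callAgent(client, store, message, threadId, userId);
  res.json({ threadId, response });
});

app.listen(3000, () => console.log("Agent server listening on port 3000"));
```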
Use Semantic Search and AutoEmbeddings for Long-Term Memory
To make long-term memory searchable by meaning, you can:
Store each memory as a document that includes:
A text summary or description.
Structured metadata (for example, userId, type, tags).
An embedding vector for semantic search.
Configure a MongoDB Vector Search index on the embedding field for your memory collection (see the index sketch after this list).
Use Voyage AI embeddings or Voyage AutoEmbeddings (through the Atlas Embedding and Reranking API) to generate embeddings for each memory document before inserting it.
Use the built-in search method on MongoDBStore, which calls $vectorSearch on your memory collection, optionally combined with metadata filters such as userId and type.
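If you manage the index yourself, the following sketch creates a MongoDB Vector Search index on the memory collection with the Node.js driver. The embedding path and numDimensions: 1024 (the dimensionality of voyage-3 embeddings) are assumptions; match them to the field and model your store actually uses, and note that the store may create this index for you:

```typescript
import { client } from "./mongodb-client";

const collection = client.db("hr_database").collection("long_term_memory");

// Vector index on the embedding field, plus a filter field for metadata.
await collection.createSearchIndex({
  name: "vector_index",
  type: "vectorSearch",
  definition: {
    fields: [
      { type: "vector", path: "embedding", numDimensions: 1024, similarity: "cosine" },
      { type: "filter", path: "type" },
    ],
  },
});
```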
The following code demonstrates how to use the built-in search
method on MongoDBStore to find memories by semantic similarity
and metadata filters:
```typescript
// semantic-memory.ts
import type { MongoDBStore } from "@langchain/langgraph-checkpoint-mongodb";

export async function searchUserMemories(
  store: MongoDBStore,
  userId: string,
  queryText: string,
  limit = 5,
) {
  const results = await store.search([userId, "memories"], {
    query: queryText,
    filter: { type: "fact" },
    limit,
  });
  return results;
}
```
The search method accepts a namespace prefix and an options object
with the following fields:
query - A text string for semantic search. The store uses MongoDB Vector Search to find memories that are semantically similar to the query.
filter - A metadata filter object to narrow results (for example, { type: "fact" }).
limit - The maximum number of results to return (default: 10).
offset - The number of results to skip for pagination (default: 0).
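For example, you might retrieve the most relevant facts about a user before generating a response:

```typescript
const memories = await searchUserMemories(
  store,
  "user-123",
  "What are this user's communication preferences?",
);

// Each result is an item with its namespace, key, stored value,
// and (for semantic queries) a similarity score.
for (const item of memories) {
  console.log(item.key, item.score, item.value);
}
```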
Note
With Voyage AutoEmbeddings, you can offload embedding generation to the Atlas Embedding and Reranking API and use MongoDB Vector Search as your storage backend. The LangGraph.js Long-Term Memory Store is designed to work with this experience so that you can combine automatic embedding generation with LangGraph Stores and Vector Search for a unified long-term memory layer.
When to Use Short-Term vs Long-Term Memory
Use the following guidelines to choose the right memory mechanism for each part of your application:
| Mechanism | When to Use | Details |
|---|---|---|
| Short-Term Memory (Checkpointer) | Use for per-thread context that only matters within a conversation. | Ideal for step-by-step reasoning, tool call results, and intermediate state. Backed by MongoDBSaver. |
| Long-Term Memory (Store) | Use for cross-thread information that should persist over time. | Ideal for user profiles, policies and constraints, long-lived facts, and semantic recall. Backed by the MongoDB Store implementation of BaseStore. |
In many real-world applications, you will:
Use short-term memory to keep the current conversation coherent.
Use long-term memory to remember the user and important facts from past conversations.
Use MongoDB Vector Search and Voyage AI AutoEmbeddings to search memories semantically when generating responses.
Next Steps
Follow the end-to-end tutorial in Build an AI Agent with LangGraph.js and MongoDB Atlas.
Use this page as a reference to layer long-term memory and semantic search on top of your existing agent.
Explore the Voyage AI docs and the Atlas Embedding and Reranking API for more details on embeddings and auto-embedding workflows.