
Build an AI Agent with LangGraph.js and Atlas Vector Search

You can integrate MongoDB Atlas with LangGraph.js to build AI agents. This tutorial demonstrates how to build an agent with LangGraph.js and Atlas Vector Search that can answer questions about your data.

Specifically, you perform the following actions:

  1. Set up the environment.

  2. Configure your Atlas cluster.

  3. Build the agent, including the agent tools.

  4. Add memory to the agent.

  5. Create a server and test the agent.

Work with the code for this tutorial by cloning the GitHub repository.

Before you begin, ensure that you have the following:

  • An Atlas cluster, or an Atlas account to create one.

  • API keys for OpenAI and Anthropic.

  • Node.js and npm installed in your development environment.

Note

This tutorial uses models from OpenAI and Anthropic, but you can modify the code to use your models of choice.

You can also follow along with this tutorial by watching the accompanying video (duration: 30 minutes).

To set up the environment, complete the following steps:

1

Create a new project directory, then run the following commands in the project to install the required dependencies:

npm init -y
npm i -D typescript ts-node @types/express @types/node
npx tsc --init
npm i langchain @langchain/langgraph @langchain/mongodb @langchain/langgraph-checkpoint-mongodb @langchain/anthropic @langchain/openai @langchain/core dotenv express mongodb zod
2

Create a .env file in your project root and add your API keys and MongoDB Atlas connection string:

OPENAI_API_KEY = "<openai-api-key>"
ANTHROPIC_API_KEY = "<anthropic-api-key>"
MONGODB_ATLAS_URI = "<connection-string>"

Note

Your project uses the following structure:

├── .env
├── index.ts
├── agent.ts
├── seed-database.ts
├── package.json
├── tsconfig.json

In this section, you configure and ingest sample data into your Atlas cluster to enable vector search over your data.

1

If you haven't already, create a cluster and obtain your connection string.

2

Create an index.ts file that establishes a connection to your Atlas cluster:

import { MongoClient } from "mongodb";
import 'dotenv/config';

const client = new MongoClient(process.env.MONGODB_ATLAS_URI as string);

async function startServer() {
  try {
    await client.connect();
    await client.db("admin").command({ ping: 1 });
    console.log("Pinged your deployment. You successfully connected to MongoDB!");
    // ... rest of the server setup
  } catch (error) {
    console.error("Error connecting to MongoDB:", error);
    process.exit(1);
  }
}

startServer();
3

Create a seed-database.ts script to generate and store sample employee records. This script performs the following actions (a minimal sketch of the script follows the list):

  • Defines a schema for employee records.

  • Creates a function to generate sample employee data using the LLM.

  • Processes each record to create a text summary to use for embeddings.

  • Uses the LangChain MongoDB integration to initialize your Atlas cluster as a vector store. This component generates vector embeddings and stores the documents in your hr_database.employees namespace.
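
The following is a minimal sketch of such a script, assuming OpenAI embeddings and an Anthropic model for data generation. The schema fields, prompt text, and helper names (generateSyntheticData, createEmployeeSummary) are illustrative assumptions, not the repository's verbatim code:

import { ChatAnthropic } from "@langchain/anthropic";
import { OpenAIEmbeddings } from "@langchain/openai";
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { MongoClient } from "mongodb";
import { z } from "zod";
import "dotenv/config";

const client = new MongoClient(process.env.MONGODB_ATLAS_URI as string);

// Schema for a single employee record (field names are assumptions).
const EmployeeSchema = z.object({
  employee_id: z.string(),
  first_name: z.string(),
  last_name: z.string(),
  job_title: z.string(),
  department: z.string(),
  skills: z.array(z.string()),
  location: z.string(),
  notes: z.string(),
});
type Employee = z.infer<typeof EmployeeSchema>;

// Use the LLM to generate a batch of synthetic employee records.
async function generateSyntheticData(): Promise<Employee[]> {
  const llm = new ChatAnthropic({ model: "claude-3-5-sonnet-20240620", temperature: 0.7 });
  const structuredLlm = llm.withStructuredOutput(z.object({ records: z.array(EmployeeSchema) }));
  const { records } = await structuredLlm.invoke(
    "Generate 5 realistic, fictional employee records for an HR database."
  );
  return records;
}

// Flatten a record into the text summary that embeddings are generated from.
function createEmployeeSummary(e: Employee): string {
  return `${e.first_name} ${e.last_name}, ${e.job_title} in ${e.department}. ` +
    `Skills: ${e.skills.join(", ")}. Location: ${e.location}. Notes: ${e.notes}`;
}

async function seedDatabase(): Promise<void> {
  try {
    await client.connect();
    await client.db("admin").command({ ping: 1 });
    console.log("Pinged your deployment. You successfully connected to MongoDB!");

    const collection = client.db("hr_database").collection("employees");
    await collection.deleteMany({});

    console.log("Generating synthetic data...");
    const records = await generateSyntheticData();

    for (const record of records) {
      // Store the summary text, the record as metadata, and its embedding
      // in hr_database.employees via the LangChain MongoDB integration.
      await MongoDBAtlasVectorSearch.fromTexts(
        [createEmployeeSummary(record)],
        [{ ...record }],
        new OpenAIEmbeddings(),
        {
          collection,
          indexName: "vector_index",
          textKey: "embedding_text",
          embeddingKey: "embedding",
        }
      );
      console.log(`Successfully processed & saved record: ${record.employee_id}`);
    }
    console.log("Database seeding completed");
  } finally {
    await client.close();
  }
}

seedDatabase().catch(console.error);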

4

Run the following command to execute the script:

npx ts-node seed-database.ts

The output resembles the following:
Pinged your deployment. You successfully connected to MongoDB!
Generating synthetic data...
Successfully processed & saved record: EMP001
Successfully processed & saved record: EMP002
Successfully processed & saved record: EMP003
Successfully processed & saved record: EMP004
Successfully processed & saved record: EMP005
Database seeding completed

Tip

After running the script, you can view the seeded data in your Atlas cluster by navigating to the hr_database.employees namespace in the Atlas UI.

5

Follow the steps to create an Atlas Vector Search index for the hr_database.employees namespace. Name the index vector_index and specify the following index definition:

{
  "fields": [
    {
      "numDimensions": 1536,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
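
Alternatively, you can create the index programmatically with the MongoDB Node.js driver's createSearchIndex method, as sketched below. This assumes a recent driver version that supports the vectorSearch index type; the create-index.ts file name is an illustrative assumption.

// create-index.ts — a sketch for creating the vector index programmatically.
import { MongoClient } from "mongodb";
import "dotenv/config";

async function createVectorIndex(): Promise<void> {
  const client = new MongoClient(process.env.MONGODB_ATLAS_URI as string);
  try {
    await client.connect();
    const collection = client.db("hr_database").collection("employees");

    // Same definition as shown above, applied to the employees collection.
    await collection.createSearchIndex({
      name: "vector_index",
      type: "vectorSearch",
      definition: {
        fields: [
          {
            type: "vector",
            path: "embedding",
            numDimensions: 1536,
            similarity: "cosine",
          },
        ],
      },
    });
    console.log("Vector search index created");
  } finally {
    await client.close();
  }
}

createVectorIndex().catch(console.error);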

In this section, you build a graph to orchestrate the agent's workflow. The graph defines the sequence of steps that the agent takes to respond to a query.

1

Create a new file named agent.ts in your project, then add the following code to begin setting up the agent. You will add more code to the asynchronous function in the subsequent steps.

import { OpenAIEmbeddings } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { StateGraph } from "@langchain/langgraph";
import { Annotation } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { MongoDBSaver } from "@langchain/langgraph-checkpoint-mongodb";
import { MongoDBAtlasVectorSearch } from "@langchain/mongodb";
import { MongoClient } from "mongodb";
import { z } from "zod";
import "dotenv/config";
export async function callAgent(client: MongoClient, query: string, thread_id: string) {
  // Define the MongoDB database and collection
  const dbName = "hr_database";
  const db = client.db(dbName);
  const collection = db.collection("employees");

  // ... (Add rest of code here)
}
2

Add the following code to the file to define the graph state:

const GraphState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
});

The state defines the data structure that flows through your agent workflow. Here, the state tracks conversation messages, with a reducer that concatenates new messages to the existing conversation history.

3

Add the following code to define a tool and tool node that uses Atlas Vector Search to retrieve relevant employee information by querying the vector store:

const employeeLookupTool = tool(
  async ({ query, n = 10 }) => {
    console.log("Employee lookup tool called");
    const dbConfig = {
      collection: collection,
      indexName: "vector_index",
      textKey: "embedding_text",
      embeddingKey: "embedding",
    };
    const vectorStore = new MongoDBAtlasVectorSearch(
      new OpenAIEmbeddings(),
      dbConfig
    );
    const result = await vectorStore.similaritySearchWithScore(query, n);
    return JSON.stringify(result);
  },
  {
    name: "employee_lookup",
    description: "Gathers employee details from the HR database",
    schema: z.object({
      query: z.string().describe("The search query"),
      n: z.number().optional().default(10).describe("Number of results to return"),
    }),
  }
);

const tools = [employeeLookupTool];
const toolNode = new ToolNode<typeof GraphState.State>(tools);
4

Add the following code to the file to determine which model to use for the agent. This example uses a model from Anthropic, but you can modify it to use your preferred model:

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
}).bindTools(tools);
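
For example, a minimal sketch of swapping in an OpenAI chat model instead, assuming you also install the @langchain/openai package (the model name shown is an assumption):

// In agent.ts, replace the ChatAnthropic import and model definition.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
}).bindTools(tools);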
5

Add the following code to define the functions that the agent uses to process messages and determine whether to continue the conversation:

  1. This function configures how the agent uses the LLM:

    • Constructs a prompt template with system instructions and conversation history.

    • Formats the prompt with the current time, available tools, and messages.

    • Invokes the LLM to generate the next response.

    • Returns the model's response to be added to the conversation state.

    async function callModel(state: typeof GraphState.State) {
      const prompt = ChatPromptTemplate.fromMessages([
        [
          "system",
          `You are a helpful AI assistant, collaborating with other assistants. Use the provided tools to progress towards answering the question. If you are unable to fully answer, that's OK, another assistant with different tools will help where you left off. Execute what you can to make progress. If you or any of the other assistants have the final answer or deliverable, prefix your response with FINAL ANSWER so the team knows to stop. You have access to the following tools: {tool_names}.\n{system_message}\nCurrent time: {time}.`,
        ],
        new MessagesPlaceholder("messages"),
      ]);
      const formattedPrompt = await prompt.formatMessages({
        system_message: "You are helpful HR Chatbot Agent.",
        time: new Date().toISOString(),
        tool_names: tools.map((tool) => tool.name).join(", "),
        messages: state.messages,
      });
      const result = await model.invoke(formattedPrompt);
      return { messages: [result] };
    }
  2. This function determines whether the agent should continue or end the conversation:

    • If the message contains tool calls, route the flow to the tools node.

    • Otherwise, end the conversation and return the final response.

    function shouldContinue(state: typeof GraphState.State) {
      const messages = state.messages;
      const lastMessage = messages[messages.length - 1] as AIMessage;
      if (lastMessage.tool_calls?.length) {
        return "tools";
      }
      return "__end__";
    }
6

Add the following code to define the sequence of steps that the agent takes to respond to a query.

const workflow = new StateGraph(GraphState)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent");

Specifically, the agent performs the following steps:

  1. The agent receives a user query.

  2. In the agent node, the agent processes the query and determines whether to use a tool or to end the conversation.

  3. If a tool is needed, the agent routes to the tools node, where it executes the selected tool. The results from the tool are sent back to the agent node.

  4. The agent interprets the tool's output and forms a response or decides on the next action.

  5. This continues until the agent determines that no further action is needed (the shouldContinue function returns __end__).

Diagram that shows the workflow of the LangGraph-MongoDB agent.

To improve the agent's performance, you can persist its state by using the MongoDB Checkpointer. Persistence allows the agent to store information about previous interactions, which the agent can use in future interactions to provide more contextually relevant responses.

1

Add the following code to your agent.ts file to set up a persistence layer for your agent's state:

const checkpointer = new MongoDBSaver({ client, dbName });
const app = workflow.compile({ checkpointer });
2

Finally, add the following code to complete the agent function to handle queries:

const finalState = await app.invoke(
  {
    messages: [new HumanMessage(query)],
  },
  { recursionLimit: 15, configurable: { thread_id: thread_id } }
);

console.log(finalState.messages[finalState.messages.length - 1].content);
return finalState.messages[finalState.messages.length - 1].content;

In this section, you create a server to interact with your agent and test its functionality.

1

Replace your index.ts file with the following code:

import 'dotenv/config';
import express, { Express, Request, Response } from "express";
import { MongoClient } from "mongodb";
import { callAgent } from './agent';

const app: Express = express();
app.use(express.json());

const client = new MongoClient(process.env.MONGODB_ATLAS_URI as string);

async function startServer() {
  try {
    await client.connect();
    await client.db("admin").command({ ping: 1 });
    console.log("Pinged your deployment. You successfully connected to MongoDB!");

    app.get('/', (req: Request, res: Response) => {
      res.send('LangGraph Agent Server');
    });

    app.post('/chat', async (req: Request, res: Response) => {
      const initialMessage = req.body.message;
      const threadId = Date.now().toString();
      try {
        const response = await callAgent(client, initialMessage, threadId);
        res.json({ threadId, response });
      } catch (error) {
        console.error('Error starting conversation:', error);
        res.status(500).json({ error: 'Internal server error' });
      }
    });

    app.post('/chat/:threadId', async (req: Request, res: Response) => {
      const { threadId } = req.params;
      const { message } = req.body;
      try {
        const response = await callAgent(client, message, threadId);
        res.json({ response });
      } catch (error) {
        console.error('Error in chat:', error);
        res.status(500).json({ error: 'Internal server error' });
      }
    });

    const PORT = process.env.PORT || 3000;
    app.listen(PORT, () => {
      console.log(`Server running on port ${PORT}`);
    });
  } catch (error) {
    console.error('Error connecting to MongoDB:', error);
    process.exit(1);
  }
}

startServer();
2

Run the following command to start your server:

npx ts-node index.ts
3

Send sample requests to interact with your agent. Your responses vary depending on your data and the models you use.

Note

The request returns a response in JSON format. You can also view the plaintext output in your terminal where the server is running.

curl -X POST -H "Content-Type: application/json" -d '{"message": "Build a team to make a web app based on the employee data."}' http://localhost:3000/chat
# Sample response
{"threadId": "1713589087654", "response": "To assemble a web app development team, we ideally need..." (truncated)}
# Plaintext output in the terminal
To assemble a web app development team, we ideally need the following roles:
1. **Software Developer**: To handle the coding and backend.
2. **UI/UX Designer**: To design the application's interface and user experience.
3. **Data Analyst**: For managing, analyzing, and visualizing data if required for the app.
4. **Project Manager**: To coordinate the project tasks and milestones, often providing communication across departments.
### Suitable Team Members for the Project:
#### 1. Software Developer
- **John Doe**
- **Role**: Software Engineer
- **Skills**: Java, Python, AWS
- **Location**: Los Angeles HQ (Remote)
- **Notes**: Highly skilled developer with exceptional reviews (4.8/5), promoted to Senior Engineer in 2018.
#### 2. Data Analyst
- **David Smith**
- **Role**: Data Analyst
- **Skills**: SQL, Tableau, Data Visualization
- **Location**: Denver Office
- **Notes**: Strong technical analysis skills. Can assist with app data integration or dashboards.
#### 3. UI/UX Designer
No specific UI/UX designer was identified in the current search. I will need to query this again or look for a graphic designer with some UI/UX skills.
#### 4. Project Manager
- **Emily Davis**
- **Role**: HR Manager
- **Skills**: Employee Relations, Recruitment, Conflict Resolution
- **Location**: Seattle HQ (Remote)
- **Notes**: Experienced in leadership. Can take on project coordination.
Should I search further for a UI/UX designer, or do you have any other parameters for the team?

You can continue the conversation by using the thread ID returned in your previous response. For example, to ask a follow-up question, use the following command. Replace <threadId> with the thread ID returned in the previous response.

curl -X POST -H "Content-Type: application/json" -d '{"message": "Who should lead this project?"}' http://localhost:3000/chat/<threadId>
# Sample response
{"response": "For leading this project, a suitable choice would be someone..." (truncated)}
# Plaintext output in the terminal
### Best Candidate for Leadership:
- **Emily Davis**:
- **Role**: HR Manager
- **Skills**: Employee Relations, Recruitment, Conflict Resolution
- **Experience**:
- Demonstrated leadership in complex situations, as evidenced by strong performance reviews (4.7/5).
- Mentored junior associates, indicating capability in guiding a team.
- **Advantages**:
- Remote-friendly, enabling flexible communication across team locations.
- Experience in managing people and processes, which would be crucial for coordinating a diverse team.
**Recommendation:** Emily Davis is the best candidate to lead the project given her proven leadership skills and ability to manage collaboration effectively.
Let me know if you'd like me to prepare a structured proposal or explore alternative options.
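
If you prefer to test from Node.js rather than curl, the following is a minimal sketch of a test client using the built-in fetch API (Node.js 18 or later). The test-client.ts file name and the sample messages are illustrative:

// test-client.ts — a sketch for exercising the /chat endpoints from Node.js.
async function main() {
  // Start a new conversation.
  const startRes = await fetch("http://localhost:3000/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: "Build a team to make a web app based on the employee data." }),
  });
  const { threadId, response } = await startRes.json();
  console.log("Agent:", response);

  // Continue the same conversation using the returned thread ID.
  const followUpRes = await fetch(`http://localhost:3000/chat/${threadId}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: "Who should lead this project?" }),
  });
  const followUp = await followUpRes.json();
  console.log("Agent:", followUp.response);
}

main().catch(console.error);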
