
Integrate MongoDB with Mastra

You can integrate MongoDB with Mastra to build AI agents. Mastra is an open-source TypeScript agent framework that provides primitives for building AI applications, including workflows, RAG, and evals.

Important

This integration is community-maintained. To learn more, see Mastra documentation or the Mastra GitHub repository.

By combining MongoDB Vector Search with Mastra's agent framework, you can implement the following capabilities for your agents:

  • Store and retrieve vector embeddings using MongoDB as your vector database

  • Filter your vector search results using MongoDB query syntax

  • Implement RAG as a tool in your agents

  • Store your agent's memory in MongoDB

To use MongoDB with Mastra, install the @mastra/mongodb package:

npm install @mastra/mongodb

To get started with Mastra and learn how to create a project, see Install Mastra.

MongoDB is a supported vector database in Mastra (see https://mastra.ai/en/docs/rag/vector-databases#supported-databases). The MongoDBVector class allows you to store and retrieve vector embeddings from MongoDB. You can use this component to implement RAG by storing embeddings from your data and retrieving them using MongoDB Vector Search.

This component requires that you create a MongoDB Vector Search index.

To use the MongoDB vector store with Mastra, import the MongoDBVector class, create an object with the class, and specify your MongoDB connection details. For example:

import { MongoDBVector } from '@mastra/mongodb'

// Instantiate MongoDB as a vector store
const mongoVector = new MongoDBVector({
  uri: process.env.MONGODB_URI, // MongoDB connection string
  dbName: process.env.MONGODB_DATABASE // Database name
})

This section highlights the most relevant methods for working with MongoDB as a vector store. For a full list of methods, see the Mastra documentation.

Before you can search your embeddings, you must create a vector search index on your collection. The dimension parameter must match the number of dimensions required by your embedding model.

// Create a vector search index
await mongoVector.createIndex({
  indexName: "vector_index", // Name of the index
  dimension: 1536, // Must match your embedding model's dimensions
});
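To confirm the index exists, you can inspect it with methods from Mastra's common vector store interface. A minimal sketch, assuming the listIndexes and describeIndex methods documented for Mastra vector stores:

// List the vector search indexes in the database
const indexes = await mongoVector.listIndexes();
console.log(indexes); // e.g. ["vector_index"]

// Inspect the configuration of a specific index
const stats = await mongoVector.describeIndex("vector_index");
console.log(stats); // e.g. { dimension: 1536, count: 0, metric: "cosine" }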

After creating an index, you can store vector embeddings with associated metadata. For a complete example, see the Mastra upsert embeddings example.

import { openai } from "@ai-sdk/openai";
import { MongoDBVector } from "@mastra/mongodb";
import { MDocument } from "@mastra/rag";
import { embedMany } from "ai";

// Create a document from text
const doc = MDocument.fromText("Your text content...");

// Split document into chunks
const chunks = await doc.chunk();

// Generate embeddings for each chunk
const { embeddings } = await embedMany({
  values: chunks.map(chunk => chunk.text), // Text content to embed
  model: openai.embedding("text-embedding-3-small"), // Embedding model
});

// Instantiate MongoDB as a vector store
const mongoVector = new MongoDBVector({
  uri: process.env.MONGODB_URI, // MongoDB connection string
  dbName: process.env.MONGODB_DATABASE, // Database name
});

// Store vector embeddings with metadata
await mongoVector.upsert({
  indexName: "vector_index", // Name of the vector search index
  vectors: embeddings, // Array of vector embeddings
  metadata: chunks.map(chunk => ({ text: chunk.text })), // Associated metadata for each embedding
});

To retrieve semantically similar documents, first convert your query to an embedding, then query the vector store. To learn more, see Retrieval in Mastra.

import { openai } from "@ai-sdk/openai";
import { embed } from "ai";
import { MongoDBVector } from "@mastra/mongodb";

// Convert query to embedding
const { embedding } = await embed({
  value: "What are the main points in the article?", // Query text
  model: openai.embedding("text-embedding-3-small"), // Embedding model
});

// Instantiate MongoDB as a vector store
const mongoVector = new MongoDBVector({
  uri: process.env.MONGODB_URI, // MongoDB connection string
  dbName: process.env.MONGODB_DATABASE // Database name
});

// Query the vector store for similar documents
const results = await mongoVector.query({
  indexName: "vector_index", // Name of the vector search index
  queryVector: embedding, // Query embedding vector
  topK: 10, // Number of results to return
});

// Display results
console.log(results);

The MongoDB vector store supports metadata filtering on your vector search query results:

  • Use the complete Mastra query syntax without limitations

  • Use standard comparison, array, logical, and element operators

  • Use nested fields and arrays in metadata

  • Filter on metadata and the contents of the original documents

The following usage example demonstrates the filtering syntax:

// Query with metadata filters
const results = await mongoVector.query({
  indexName: "vector_index", // Name of the vector search index
  queryVector: queryVector, // Query embedding vector
  topK: 10, // Number of results to return
  filter: {
    category: "electronics", // Simple equality filter
    price: { $gt: 100 }, // Numeric comparison
    tags: { $in: ["sale", "new"] }, // Array membership
  },
});
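The example above covers equality, comparison, and array operators. The following sketch combines logical operators with a nested metadata field; the details.color, price, and tags fields are hypothetical and used only for illustration:

// Query with logical operators and a nested metadata field
const filteredResults = await mongoVector.query({
  indexName: "vector_index", // Name of the vector search index
  queryVector: queryVector, // Query embedding vector
  topK: 10, // Number of results to return
  filter: {
    $and: [
      { "details.color": "blue" }, // Nested field (dot notation)
      { $or: [
        { price: { $lt: 50 } }, // Either a low price...
        { tags: "sale" }, // ...or tagged as on sale
      ]},
    ],
  },
});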

Tip

For optimal performance, create indexes on metadata fields that you frequently filter on. To learn more, see MongoDB Vector Search Indexes.

To learn more, see Metadata Filters.

You can use MongoDB as a vector store within Mastra AI agents to implement agentic RAG. This allows your agents to use MongoDB Vector Search as a tool to help complete tasks.

Mastra also provides a MONGODB_PROMPT constant that you can include in your agent instructions to optimize how the agent uses MongoDB for retrieval. To learn more, see Vector Store Prompts.

The following example shows how to create an AI agent with RAG capabilities using MongoDB as a vector store:

import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";
import { MONGODB_PROMPT } from "@mastra/mongodb";
import { createVectorQueryTool } from "@mastra/rag";

// Create a vector query tool for the agent
const vectorQueryTool = createVectorQueryTool({
  vectorStoreName: "mongoVector", // Name of the MongoDB vector store
  indexName: "vector_index", // Name of the Vector Search index
  model: openai.embedding("text-embedding-3-small"), // Embedding model
});

// Define an AI agent with RAG capabilities
export const ragAgent = new Agent({
  name: "RAG Agent", // Agent name
  model: openai("gpt-4o-mini"), // LLM model
  instructions: `
    Process queries using the provided context. Structure responses to be concise and relevant.
    ${MONGODB_PROMPT}
  `,
  tools: { vectorQueryTool }, // Tools available to the agent
});
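For the createVectorQueryTool call above to resolve the store by its vectorStoreName, the vector store must be registered on the Mastra instance under that same name. A minimal sketch, assuming the agent is exported from a local ./agents module:

import { Mastra } from "@mastra/core";
import { MongoDBVector } from "@mastra/mongodb";
import { ragAgent } from "./agents";

// Instantiate MongoDB as a vector store
const mongoVector = new MongoDBVector({
  uri: process.env.MONGODB_URI, // MongoDB connection string
  dbName: process.env.MONGODB_DATABASE, // Database name
});

// Register the store under the name referenced by vectorStoreName ("mongoVector")
export const mastra = new Mastra({
  agents: { ragAgent },
  vectors: { mongoVector },
});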

You can use Mastra's memory system with MongoDB as the storage backend. This allows your agent to remember past interactions and use that information to inform future decisions.

For a complete tutorial, see Memory with MongoDB.
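The usage example that follows assumes an agent named mongodbAgent whose memory is backed by MongoDB. A minimal sketch of that configuration, assuming the MongoDBStore class exported by @mastra/mongodb and the Memory class from @mastra/memory:

import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { MongoDBStore } from "@mastra/mongodb";
import { openai } from "@ai-sdk/openai";

// Configure Mastra memory with MongoDB as the storage backend
const memory = new Memory({
  storage: new MongoDBStore({
    url: process.env.MONGODB_URI, // MongoDB connection string
    dbName: process.env.MONGODB_DATABASE, // Database name
  }),
});

// Attach the MongoDB-backed memory to an agent
export const mongodbAgent = new Agent({
  name: "mongodb-agent",
  model: openai("gpt-4o-mini"),
  instructions: "You are a helpful assistant. Use your memory of past interactions when relevant.",
  memory,
});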

The following example demonstrates how to use memory with AI agents. It uses memoryOptions to scope recall for the request: lastMessages: 5 limits recency-based recall, and semanticRecall fetches the topK: 3 most relevant messages, including messageRange: 2 neighboring messages for context around each match.

import "dotenv/config";
import { mastra } from "./mastra";
const threadId = "123";
const resourceId = "user-456";
const agent = mastra.getAgent("mongodbAgent");
const message = await agent.stream("My name is Mastra", {
memory: {
thread: threadId,
resource: resourceId
}
});
await message.textStream.pipeTo(new WritableStream());
const stream = await agent.stream("What's my name?", {
memory: {
thread: threadId,
resource: resourceId
},
memoryOptions: {
lastMessages: 5,
semanticRecall: {
topK: 3,
messageRange: 2
}
}
});
for await (const chunk of stream.textStream) {
process.stdout.write(chunk);
}

To learn more about using Mastra with MongoDB, see the Mastra documentation and the Mastra GitHub repository.
