
Rapid AI Agent Deployment

MongoDB Atlas enables rapid development of AI agents across industries.

Use cases: App-Driven Analytics, Gen AI

Industries: Financial Services, Healthcare, Insurance, Manufacturing and Mobility, Retail, Telecommunications

Products: MongoDB Atlas, MongoDB Atlas Database, MongoDB Atlas Vector Search

Partners: AWS Bedrock, Cohere, LangChain

Companies around the world are incorporating AI agents into their business workflows. The most common uses of AI agents are assisting with research, analysis, and writing code. LangChain’s recent survey of over 1,000 professionals across multiple industries showed that over 51% have already deployed agents in production.

Production AI agents face three key challenges when scaling beyond basic tasks:

  • Complex data integration and handling: AI agents rely on disparate data sources such as structured logs, unstructured text, and sensor streams. This makes data unification difficult for real-time decision-making. Storing all relevant data in one database speeds up development.

  • High concurrency and low latency: Agents must handle large request volumes and respond quickly. This can overwhelm databases that lack high throughput. While LLM inference adds latency, database performance remains important as agents scale. Production agents run in parallel, make multiple tool calls, and rely on current data for decisions. Slow databases create bottlenecks that increase response time and reduce real-time capability.

  • Data governance and security: AI agents must store and access data securely while maintaining compliance. MongoDB Atlas provides built-in security controls such as client-side field-level encryption, queryable encryption, and auditing capabilities. These features ensure agents only access authorized data and maintain traceability.

This solution presents an agentic framework that offers a flexible foundation to accelerate the development of AI-driven workflows. Rather than offering a fixed set of tools or functionalities, this framework provides a starting point for building agents tailored to specific use cases.

Use this solution to create agents with linear workflows using MongoDB Atlas and LangGraph. For complex use cases, you can extend the framework by adding new tools, nodes, conditions, or workflows.
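
To make the linear workflow concrete, the following minimal sketch wires two placeholder nodes into a LangGraph graph. The state fields and node bodies are illustrative assumptions, not the framework's actual tools:

from typing import TypedDict

from langgraph.graph import StateGraph, START, END

# Minimal linear workflow sketch. The state fields and node bodies are
# placeholders, not the framework's actual tools.
class AgentState(TypedDict, total=False):
    query: str            # incoming task or question
    data: list            # time series readings from CSV or MongoDB Atlas
    recommendation: str   # final diagnostic output

def read_data(state: AgentState) -> AgentState:
    # The real framework reads from a CSV file or MongoDB Atlas here.
    return {"data": [{"timestamp": "2025-02-19T13:00:00Z", "gdp": 2.5}]}

def recommend(state: AgentState) -> AgentState:
    # The real framework calls an LLM through AWS Bedrock here.
    return {"recommendation": f"Reviewed {len(state['data'])} readings."}

workflow = StateGraph(AgentState)
workflow.add_node("read_data", read_data)
workflow.add_node("recommend", recommend)
workflow.add_edge(START, "read_data")
workflow.add_edge("read_data", "recommend")  # linear: each node has one outgoing edge
workflow.add_edge("recommend", END)

app = workflow.compile()
print(app.invoke({"query": "GDP growth slowing"}))

Extending the framework for a more complex use case amounts to adding nodes, tools, or conditional edges (for example, with workflow.add_conditional_edges) to a graph like this one.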

Figure 1 shows the main components of an agent, which include:

  • Receiving tasks from users or automated triggers

  • Using an LLM to generate responses or follow workflows

  • Using various tools and models to store and retrieve data from MongoDB Atlas


Figure 1. Basic components of an AI agent

This agentic framework executes multi-step diagnostic workflows using LangGraph and generates actionable insights. The framework performs the following operations:

  • Reads time-series data from CSV files or MongoDB Atlas

  • Generates text embeddings

  • Performs vector searches to identify similar past queries

  • Persists session and run data

  • Produces diagnostic recommendations

MongoDB Atlas stores agent profiles, historical recommendations, time-series data, and session logs. This ensures full traceability and enables efficient querying and reusability of past insights.
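
For example, the embedding and vector search steps listed above might look like the following sketch, using boto3 for Amazon Bedrock and PyMongo for Atlas. The connection string, AWS region, database name, and the numCandidates/limit values are assumptions:

import json

import boto3
from pymongo import MongoClient

client = MongoClient("<YOUR_ATLAS_CONNECTION_STRING>")  # placeholder URI
queries = client["agentic_macro_indicators"]["queries"]
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    # Cohere Embed English V3 on Bedrock; "search_query" marks this text
    # as a query rather than a document to be indexed.
    response = bedrock.invoke_model(
        modelId="cohere.embed-english-v3",
        body=json.dumps({"texts": [text], "input_type": "search_query"}),
    )
    return json.loads(response["body"].read())["embeddings"][0]

# Find the past queries most similar to the current one.
pipeline = [
    {"$vectorSearch": {
        "index": "agentic_macro_indicators_queries_vs_idx",
        "path": "query_embedding",
        "queryVector": embed("GDP growth slowing"),
        "numCandidates": 50,
        "limit": 3,
    }},
    {"$project": {"_id": 0, "query": 1, "recommendation": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
]
for doc in queries.aggregate(pipeline):
    print(doc)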


Figure 2. Agentic AI reference architecture

MongoDB supports agentic AI through its flexibility, performance, and scalability. The document model stores structured, semi-structured, and unstructured data natively and handles diverse datasets. MongoDB enables AI agents to react to new information, maintain internal states in real-time, and learn continuously.

  • Flexible data model: MongoDB stores data such as time series logs, agent profiles, and recommendation outputs in a unified format. Its flexible schema eliminates database redesigns when data requirements change. AI agents can store hierarchical states with nested documents that adapt to changing characteristics and contexts. MongoDB supports versioning and tracking agent evolution, with the ability to pause and resume agent contexts.

  • Vector search: MongoDB Atlas supports native vector search for similarity searches on vector embeddings. This feature matches current queries with historical data, enhances diagnostic accuracy, and provides relevant recommendations. Vector search enables pattern recognition and contextual retrieval, which reduces LLM hallucination. MongoDB handles semantic matching, contextual searches, and multi-dimensional data analysis for AI agent workflows.

  • Scalability and performance: With MongoDB, AI agents can scale horizontally to handle large volumes of real-time data, distribute storage and computational load, and maintain high availability through MongoDB replica sets.

  • Time series collections: MongoDB time series collections ingest large volumes of data efficiently. These collections enable AI agents to track sequential interactions, learning patterns, and state changes over time. This capability maintains context and implements adaptive decision-making. Time series optimizations include automatic data compression, improved storage efficiency, and fast time-based queries. With MongoDB, AI agents can maintain current and historical records without compromising performance or data integrity.

  • Integration: MongoDB integrates with agentic frameworks like LangGraph through JSON-like documents, dynamic schema, and indexing capabilities. This integration enables AI agents to maintain data and memory structures, track multi-step reasoning processes, and implement state management that persists across sessions (see the sketch after this list).
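
As a rough sketch of the time series and integration points above, the snippet below creates a time series collection and attaches a MongoDB-backed LangGraph checkpointer. It assumes the langgraph-checkpoint-mongodb package; the connection string and names are placeholders:

from langgraph.checkpoint.mongodb import MongoDBSaver
from pymongo import MongoClient

client = MongoClient("<YOUR_ATLAS_CONNECTION_STRING>")  # placeholder URI
db = client["agentic_macro_indicators"]

# Time series collection: timeField is required; granularity tells the
# server how to bucket measurements for compression and fast range queries.
db.create_collection(
    "timeseries_data",
    timeseries={"timeField": "timestamp", "granularity": "minutes"},
)

# Checkpointer that persists LangGraph state to MongoDB, so an agent's
# context can be paused and resumed across sessions.
checkpointer = MongoDBSaver(client, db_name="agentic_macro_indicators")
# graph = workflow.compile(checkpointer=checkpointer)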

Use the following steps to set up the agentic framework with MongoDB Atlas and LangGraph. For more detailed explanations of each step, see the GitHub repository.

Step 1

  1. In the data/csv folder, add a CSV file with time series data. The data must be relevant to your use case and adhere to the following guidelines (an ingestion sketch follows at the end of this step):

    • In the CSV file, add a header row with column names.

    • Name the first column timestamp and put timestamps in the format YYYY-MM-DDTHH:MM:SSZ. For example, 2025-02-19T13:00:00Z.

    • Fill the remaining columns with relevant data for your use case.

    • For framework testing, keep the data size as small as possible.

    Sample data:

    timestamp,gdp,interest_rate,unemployment_rate,vix
    2025-02-19T13:00:00Z,2.5,1.75,3.8,15
    2025-02-19T13:05:00Z,2.7,1.80,3.7,18
    2025-02-19T13:10:00Z,2.6,1.85,3.9,22
    2025-02-19T13:15:00Z,2.4,1.70,4.0,10
    2025-02-19T13:20:00Z,2.3,1.65,4.1,20
  2. In the same folder, add a queries file. This file contains the queries that you use to showcase vector search capabilities as part of the agentic workflow. In your file, adhere to the following guidelines:

    • In the CSV file, add a header row with column names.

    • Name the first column query and fill it with the queries.

    • Name the second column recommendation and fill it with the expected recommendations.

    • For framework testing, keep the data size as small as possible.

    Sample data:

    query,recommendation
    GDP growth slowing,Consider increasing bond assets to mitigate risks from potential economic slowdown.
    GDP showing strong growth,Increase equity assets to capitalize on favorable investment conditions.
    Interest rates rising,Shift focus to bond assets as higher rates may impact borrowing-sensitive sectors.
    Interest rates falling,Increase real estate assets to take advantage of lower borrowing costs.
    Unemployment rate increasing,Reduce equity assets to account for potential economic weakness and reduced consumer spending.
    Unemployment rate decreasing,Increase equity assets to benefit from improved economic conditions and corporate profits.
    VIX above 20,Reduce equity assets to manage risks associated with high market volatility.
    VIX below 12,Increase equity assets to capitalize on low market volatility and investor confidence.
    VIX within normal range (12-20),Maintain current asset allocation as market conditions are stable.
    Combination of rising interest rates and high VIX,Focus on bond assets to hedge against market volatility and borrowing cost impacts.
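
Before moving on, you can verify that the data file from this step loads cleanly by ingesting it into a time series collection. This is a hedged sketch; the file name, connection string, and collection name are examples:

import pandas as pd
from pymongo import MongoClient

# Load the CSV from this step and parse the ISO 8601 timestamps into
# native datetimes, which MongoDB stores as BSON dates.
df = pd.read_csv("data/csv/macro_indicators.csv")  # example file name
df["timestamp"] = pd.to_datetime(df["timestamp"])

client = MongoClient("<YOUR_ATLAS_CONNECTION_STRING>")  # placeholder URI
collection = client["agentic_macro_indicators"]["timeseries_data"]
collection.insert_many(df.to_dict("records"))
print(f"Inserted {len(df)} readings")
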
Step 2

In MongoDB Atlas, create a database named agentic_<your-use-case>. For example, agentic_macro_indicators. Reflect the name in the environment variables.

Create the following collections:

  • agent_profiles, for storing agent profiles. You can import sample data into this collection from the sample file in the GitHub repository.

  • queries, for storing queries. Import the queries from the queries.csv file that you created in Step 1 (a sketch follows this list).
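
A minimal sketch of both imports with PyMongo, assuming placeholder connection details; the agent profile fields mirror DEFAULT_AGENT_PROFILE in the config shown in Step 3:

import csv

from pymongo import MongoClient

client = MongoClient("<YOUR_ATLAS_CONNECTION_STRING>")  # placeholder URI
db = client["agentic_macro_indicators"]

# Load the queries.csv file from Step 1 into the queries collection.
with open("data/csv/queries.csv", newline="") as f:  # example path
    db["queries"].insert_many(list(csv.DictReader(f)))

# Insert a minimal agent profile document.
db["agent_profiles"].insert_one({
    "agent_id": "DEFAULT",
    "profile": "Default Agent Profile",
    "role": "Expert Advisor",
    "instructions": "Follow procedures meticulously.",
})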

Step 3

Go to the config folder and create or update the config.json file. The file must contain the following structure:

{
  "CSV_DATA": "data/csv/<YOUR_FILE_NAME>.csv",
  "MDB_DATABASE_NAME": "<YOUR_MONGODB_DATABASE_NAME>",
  "MDB_TIMESERIES_COLLECTION": "<YOUR_MONGODB_TIMESERIES_COLLECTION_NAME>",
  "DEFAULT_TIMESERIES_DATA": [
    {
      "timestamp": "<DEFAULT_TIMESTAMP_IN_YYYY-MM-DDTHH:MM:SSZ>"
      // Add your default data fields here; see config_example.json for reference
    }
  ],
  "CRITICAL_CONDITIONS": {
    // Example critical condition for GDP growth
    "gdp": {"threshold": 2.5, "condition": "<", "message": "GDP growth slowing: {value}%"}
    // Add other critical conditions for your use case; see config_example.json for reference
  },
  "MDB_TIMESERIES_TIMEFIELD": "<YOUR_TIMESTAMP_FIELD_NAME>",
  "MDB_TIMESERIES_GRANULARITY": "<YOUR_TIMESERIES_GRANULARITY>",
  "MDB_EMBEDDINGS_COLLECTION": "queries", // Collection that stores the queries
  "MDB_EMBEDDINGS_COLLECTION_VS_FIELD": "query_embedding", // Field that stores the embeddings
  "MDB_VS_INDEX": "<YOUR_MONGODB_DATABASE_NAME>_queries_vs_idx", // Replace <YOUR_MONGODB_DATABASE_NAME> with your MongoDB database name
  "MDB_HISTORICAL_RECOMMENDATIONS_COLLECTION": "historical_recommendations", // Collection that stores the recommendations
  "SIMILAR_QUERIES": [
    // Example default similar query for GDP growth. These entries serve as a
    // fallback for the vector search tool when no match is found in the
    // queries collection; see config_example.json for reference
    {
      "query": "GDP growth slowing",
      "recommendation": "Consider increasing bond assets to mitigate risks from potential economic slowdown."
    }
  ],
  "MDB_CHAT_HISTORY_COLLECTION": "chat_history", // Collection that stores the chat history
  "MDB_CHECKPOINTER_COLLECTION": "checkpoints", // Collection that stores the checkpoints
  "MDB_LOGS_COLLECTION": "logs", // Collection that stores the logs
  "MDB_AGENT_PROFILES_COLLECTION": "agent_profiles", // Collection that stores the agent profiles
  "MDB_AGENT_SESSIONS_COLLECTION": "agent_sessions", // Collection that stores the agent sessions
  "AGENT_PROFILE_CHOSEN_ID": "<YOUR_AGENT_PROFILE_ID>", // Replace with the ID of the agent profile to use; see config_example.json for reference
  // Example default agent profile, modeled on a portfolio advisor
  "DEFAULT_AGENT_PROFILE": {
    "agent_id": "DEFAULT",
    "profile": "Default Agent Profile",
    "role": "Expert Advisor",
    "kind_of_data": "Specific Data",
    "motive": "diagnose the query and provide recommendations",
    "instructions": "Follow procedures meticulously.",
    "rules": "Document all steps.",
    "goals": "Provide actionable recommendations."
  },
  "EMBEDDINGS_MODEL_NAME": "Cohere Embed English V3 Model (within AWS Bedrock)", // Human-readable name of the embeddings model
  "EMBEDDINGS_MODEL_ID": "cohere.embed-english-v3", // Model ID for the embeddings model
  "CHATCOMPLETIONS_MODEL_NAME": "Anthropic Claude 3 Haiku (within AWS Bedrock)", // Human-readable name of the chat completions model
  "CHATCOMPLETIONS_MODEL_ID": "anthropic.claude-3-haiku-20240307-v1:0", // Model ID for the chat completions model
  // Sample agent workflow graph that wires together the tools defined in agent_tools.py.
  // Be careful when modifying this graph: every node ID must reference a tool
  // defined in agent_tools.py, along with its imports
  "AGENT_WORKFLOW_GRAPH": {
    "nodes": [
      {"id": "reasoning_node", "tool": "agent_tools.generate_chain_of_thought_tool"},
      {"id": "data_from_csv", "tool": "agent_tools.get_data_from_csv_tool"},
      {"id": "process_data", "tool": "agent_tools.process_data_tool"},
      {"id": "embedding_node", "tool": "agent_tools.get_query_embedding_tool"},
      {"id": "vector_search", "tool": "agent_tools.vector_search_tool"},
      {"id": "process_vector_search", "tool": "agent_tools.process_vector_search_tool"},
      {"id": "persistence_node", "tool": "agent_tools.persist_data_tool"},
      {"id": "recommendation_node", "tool": "agent_tools.get_llm_recommendation_tool"}
    ],
    "edges": [
      {"from": "reasoning_node", "to": "data_from_csv"},
      {"from": "data_from_csv", "to": "process_data"},
      {"from": "process_data", "to": "embedding_node"},
      {"from": "embedding_node", "to": "vector_search"},
      {"from": "vector_search", "to": "process_vector_search"},
      {"from": "process_vector_search", "to": "persistence_node"},
      {"from": "persistence_node", "to": "recommendation_node"},
      {"from": "recommendation_node", "to": "END"}
    ],
    "entry_point": "reasoning_node"
  }
}
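
For reference, here is one hedged sketch of how AGENT_WORKFLOW_GRAPH could be translated into a LangGraph graph. The state schema and tool resolution are assumptions about the implementation, and it presumes the // comments above are removed so the file parses as valid JSON:

import importlib
import json
from typing import TypedDict

from langgraph.graph import StateGraph, END

class AgentState(TypedDict, total=False):
    query: str
    chain_of_thought: str
    data: list
    recommendation: str

with open("config/config.json") as f:
    config = json.load(f)  # assumes the comments were stripped from the file

spec = config["AGENT_WORKFLOW_GRAPH"]
workflow = StateGraph(AgentState)

# Resolve each "module.function" tool string and register it as a node.
for node in spec["nodes"]:
    module_name, func_name = node["tool"].rsplit(".", 1)
    tool = getattr(importlib.import_module(module_name), func_name)
    workflow.add_node(node["id"], tool)

# Wire the edges; "END" in the config maps to LangGraph's END sentinel.
for edge in spec["edges"]:
    workflow.add_edge(edge["from"], END if edge["to"] == "END" else edge["to"])

workflow.set_entry_point(spec["entry_point"])
app = workflow.compile()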

After you update the config file:

  1. Configure the environment variables.

  2. Create vector embeddings.

  3. Create the vector search index (see the sketch below).
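
The vector search index can be created programmatically with PyMongo, as in this sketch. The 1024 dimensions match Cohere Embed English V3; the similarity metric and connection string are assumptions:

from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("<YOUR_ATLAS_CONNECTION_STRING>")  # from your environment variables
queries = client["agentic_macro_indicators"]["queries"]

index_model = SearchIndexModel(
    definition={
        "fields": [{
            "type": "vector",
            "path": "query_embedding",   # matches MDB_EMBEDDINGS_COLLECTION_VS_FIELD
            "numDimensions": 1024,       # Cohere Embed English V3 output size
            "similarity": "cosine",      # assumed metric
        }]
    },
    name="agentic_macro_indicators_queries_vs_idx",  # matches MDB_VS_INDEX
    type="vectorSearch",
)
queries.create_search_index(model=index_model)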


Figure 3. Logical architecture

Agentic AI introduces software that processes data differently from traditional applications. To leverage agents in business applications, software delivery teams must understand:

  • How agents receive instructions for predetermined workflows

  • How agents access and interact with APIs and databases

  • How agents persist and manage state

These activities form the baseline setup for software delivery teams to deploy agents, regardless of use case, workflow, or industry context.

Frameworks abstract common tasks and components to speed development and deployment while reducing maintenance complexity. This agentic framework helps software delivery teams build and maintain agentic solutions from common, configurable components, and teaches them repeatable patterns for long-term scalability and maintenance.

Authors:

  • Julian Boronat, MongoDB

  • Peyman Parsi, MongoDB

  • Jeff Needham, MongoDB

  • Luca Napoli, MongoDB

  • Humza Akthar, MongoDB

Learn more:

  • Context-Aware RAG for Technical Docs

  • Multi-Agent AI Predictive Maintenance with MongoDB

  • Predictive Maintenance Excellence with MongoDB Atlas
