This page shows how to use the MongoDB MCP Server through practical examples with natural-language prompts. Each example shows the prompt you can enter in your AI client and a sample response you might receive from the large language model (LLM).
You can use the examples on this page as starting points for your own interactions with the MongoDB MCP Server.
Note
The exact output you receive will vary depending on your data, your AI client, and the LLM you use. These examples omit private information such as organization IDs, project IDs, and passwords.
Prerequisites
Before running these examples, make sure you have the following:
A working MongoDB MCP Server configuration.
Access to a MongoDB deployment with data to explore.
Sample data loaded into your deployment.
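A typical client configuration might look like the following sketch (for example, in a Claude Desktop configuration file). The exact file location, server package invocation, and option names depend on your AI client and MCP Server version, and the connection string shown is a placeholder:

```json
{
  "mcpServers": {
    "MongoDB": {
      "command": "npx",
      "args": ["-y", "mongodb-mcp-server"],
      "env": {
        "MDB_MCP_CONNECTION_STRING": "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/"
      }
    }
  }
}
```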
Atlas administration
The following examples demonstrate how to manage your Atlas infrastructure by using the MongoDB MCP Server.
View organizations and projects
Get an overview of your Atlas account structure and the available resources.
Show my Atlas organizations and projects
Ran ``atlas-list-orgs`` Ran ``atlas-list-projects`` Here are your MongoDB Atlas organizations and projects: **Organizations:** TestOrganization Organization ID: XXX **Projects:** TestProject Project ID: XXX Organization: TestOrganization Created: August 7, 2020 at 11:48:55 AM You have one organization containing one project. This project contains your TestCluster that we're currently connected to.
Create an Atlas cluster
Set up a new Atlas project with a cluster and all the required access configuration in a single workflow.
Create a new Atlas project named myNewProject and create a free cluster named myNewCluster in the new project, add access list for my current IP, create a database user named myNewUser with read and write access to the new cluster, and return the connection string for the new cluster
I'll create a new Atlas project, free cluster, and set up all the necessary access details for you. Ran ``atlas-create-project`` Great! I've created a new project named "myNewProject" in your MongoDB Atlas organization. Now let's list the projects to get the project ID for the new project: Ran ``atlas-list-projects`` Now I'll create a free cluster named "myNewCluster" in the new project: Ran ``atlas-create-free-cluster`` Now, I'll add your current IP address to the access list: Ran ``atlas-create-access-list`` Now let's create a database user named "myNewUser" with read and write access: Ran ``atlas-create-db-user`` Perfect! Everything has been set up successfully: **New Project Information:** Project Name: myNewProject Project ID: XXX **New Cluster Information:** Cluster Name: myNewCluster Region: US East (US_EAST_1) **Database User:** Username: myNewUser Password: XXX Access: Read and write to any database **Connection String:** mongodb+srv://myNewUser:XXX@myNewCluster.mongodb.net/ Your current IP address has been added to the access list, so you should be able to connect to the cluster from your current location. The cluster will take a few minutes to fully provision and be ready for connections.
Data exploration
The following examples demonstrate how you can use the MongoDB MCP Server to explore and understand your MongoDB data.
List available databases
Discover which databases are available in your cluster and learn about their contents.
Show my Atlas databases
Ran ``list-databases`` Here are the databases in your Atlas cluster "TestCluster": **Sample Databases:** sample_mflix - 138.4 MB (Movie database with films, theaters, users, comments) sample_training - 53.6 MB (Training data for learning MongoDB) sample_airbnb - 55.3 MB (Airbnb listings and reviews data) sample_analytics - 10.0 MB (Analytics and customer data) sample_restaurants - 6.8 MB (Restaurant and review data) sample_weatherdata - 2.7 MB (Weather observation data) sample_geospatial - 1.3 MB (Geospatial data examples) sample_supplies - 1.1 MB (Supply chain data) sample_guides - 41 KB (Getting started guides) **System Databases:** local - 4.1 GB (MongoDB internal database) admin - 365 KB (Administrative database) Your cluster contains MongoDB's sample datasets, which are great for learning and testing. The largest databases are sample_mflix (movies), sample_training, and sample_airbnb. Would you like me to explore any specific database or show you the collections within one of them?
Explore collections
Drill into a specific database to understand its structure and collections.
Show the collections in the Atlas sample_mflix database
Ran ``list-collections`` Here are the collections in the sample_mflix database: **Core Collections:** movies - Main collection containing movie data (titles, genres, cast, ratings, etc.) users - User profiles and account information comments - User comments and reviews on movies theaters - Movie theater locations and details sessions - User session data **Additional Collections:** embedded_movies - Movies collection with embedded/denormalized data structure The movies collection contains detailed information about films including plot, cast, directors, ratings, release dates, and genres. The comments collection links to movies and users to store user reviews and discussions. Would you like me to explore any specific collection in more detail, such as showing sample documents or collection statistics?
Data analysis
Analyze your MongoDB data to identify patterns and trends.
Summarize the Atlas movies collection by thrillers
I'll help you analyze the Atlas movies collection focusing on thrillers. First, let me check what databases are available in your connected cluster. Ran ``list-databases`` Perfect! I can see you have the sample_mflix database which contains the movies collection. Let me check the collections in that database: Ran ``list-collections`` Great! Now let me analyze the movies collection to find thrillers and provide a summary. First, let me check the schema to understand how genres are stored: Ran ``collection-schema`` Perfect! I can see that genres are stored as an array of strings. Now let me get a count of thriller movies and then perform an aggregation to summarize them: Ran ``count`` Now let me create a comprehensive summary of thriller movies using aggregation: Ran ``aggregate`` I see there are some data quality issues. Let me get better data by filtering for movies with numeric ratings: Ran ``aggregate`` Now let me get some more summary statistics by decade: Ran ``aggregate`` Here's a comprehensive analysis of thriller movies in your Atlas sample_mflix collection: **Overall Statistics** Total Thriller Movies: 2,454 films Time Span: 1924 to 2011 Average Rating: 6.3/10 (IMDb) Average Runtime: 105 minutes Total IMDb Votes: 85.8 million across all thrillers **Top-Rated Thriller Movies** Drishyam (2013) - 8.9/10 - Indian thriller directed by Jeethu Joseph Tinker Tailor Soldier Spy (1979) - 8.7/10 - British spy thriller Rear Window (1954) - 8.6/10 - Alfred Hitchcock classic The Silence of the Lambs (1991) - 8.6/10 - Psychological thriller Se7en (1995) - 8.6/10 - David Fincher crime thriller The Usual Suspects (1995) - 8.6/10 - Neo-noir mystery Léon: The Professional (1994) - 8.6/10 - Action thriller Psycho (1960) - 8.6/10 - Another Hitchcock masterpiece Memento (2000) - 8.5/10 - Christopher Nolan psychological thriller **Trends by Decade** 1920s-1950s: Few thrillers but high quality (avg 7.2-7.8 rating) 1960s-1970s: Growing popularity, still maintaining quality 1980s: 
Significant growth (200 films), quality decline begins 1990s: Peak production era (417 films), avg 6.3 rating 2000s: Highest volume (854 films), avg 6.24 rating 2010s: 730 films, lowest average rating (6.12) The data shows that while thriller production increased dramatically from the 1980s onward, the average quality (as measured by IMDb ratings) has generally declined, suggesting the genre became more commercialized with higher volume but potentially lower artistic standards in recent decades.
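A summary like the one above can be produced with an aggregation pipeline along the lines of the following sketch. Field names follow the sample_mflix schema; the tiny in-memory dataset below is hypothetical and only illustrates what the $match and $group stages compute:

```javascript
// Hypothetical pipeline the MCP "aggregate" tool might run against sample_mflix.movies:
const thrillerSummaryPipeline = [
  { $match: { genres: "Thriller", "imdb.rating": { $type: "number" } } },
  {
    $group: {
      _id: null,
      count: { $sum: 1 },
      avgRating: { $avg: "$imdb.rating" },
      avgRuntime: { $avg: "$runtime" },
    },
  },
];

// Minimal in-memory illustration of the same filter-then-average logic:
const sampleDocs = [
  { title: "Se7en", genres: ["Crime", "Thriller"], imdb: { rating: 8.6 }, runtime: 127 },
  { title: "Psycho", genres: ["Horror", "Thriller"], imdb: { rating: 8.6 }, runtime: 109 },
  { title: "Cars", genres: ["Animation"], imdb: { rating: 7.1 }, runtime: 117 },
];
const thrillers = sampleDocs.filter((d) => d.genres.includes("Thriller"));
const avgRating =
  thrillers.reduce((sum, d) => sum + d.imdb.rating, 0) / thrillers.length;

// In mongosh: db.movies.aggregate(thrillerSummaryPipeline)
```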
Database operations
The following examples demonstrate how to use the MongoDB MCP Server to perform common database operations.
Insert new documents
Add new data to your collections.
Add a document to the movies_copy collection
I'll add a new document to the movies_copy collection. Let me create a sample movie document with the same structure as the existing movies: Ran ``insert-many`` Perfect! I've successfully added a new movie document to the movies_copy collection. Here's what I added: **New Movie Added:** Title: "The AI Thriller" Genres: Thriller, Sci-Fi, Drama Year: 2024 Runtime: 118 minutes IMDb Rating: 7.8/10 (125,000 votes) Cast: John Smith, Sarah Johnson, Michael Chen Director: Alex Rodriguez Rating: PG-13 The document was inserted with ID: 684b15fb5084ddef1f265e8f Let me verify the collection now has one additional document: Ran ``count`` Confirmed! The movies_copy collection now contains 21,350 documents (up from 21,349), including your new thriller movie about AI consciousness.
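The insert above corresponds to a document shaped roughly like the following sketch. The concrete values are the hypothetical ones the assistant invented; with mongosh or a MongoDB driver, the document would be passed to insertOne or insertMany:

```javascript
// Hypothetical document matching the structure described in the response above.
const newMovie = {
  title: "The AI Thriller",
  genres: ["Thriller", "Sci-Fi", "Drama"],
  year: 2024,
  runtime: 118,
  imdb: { rating: 7.8, votes: 125000 },
  cast: ["John Smith", "Sarah Johnson", "Michael Chen"],
  directors: ["Alex Rodriguez"],
  rated: "PG-13",
};

// In mongosh: db.movies_copy.insertOne(newMovie)
```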
Create new collections
Create and customize new collections.
Create a new collection to store movie purchases data that includes geospatial and timeseries fields
I'll create a new collection for storing movie purchases data with geospatial and timeseries fields. Let me first create the collection and then add some sample documents with the appropriate structure. Ran ``create-collection`` Now I'll insert sample documents that include geospatial data (customer location, theater location) and timeseries data (purchase timestamp, viewing timestamp): Ran ``insert-many`` Perfect! I've created the movie_purchases collection with sample data that includes both geospatial and timeseries fields. Here's what the collection contains: **Collection Structure** **Geospatial Fields:** customer_location: GeoJSON Point with customer's coordinates theater_location: GeoJSON Point with theater coordinates customer_address: Human-readable address information **Timeseries Fields:** purchase_timestamp: When the purchase was made viewing_timestamp: When the movie was actually watched viewing_duration_minutes: How long the customer watched completion_percentage: Percentage of movie completed **Sample Data Added:** Purchase 1: Customer in San Francisco bought "The Shawshank Redemption" rental for $12.99 Purchase 2: Customer in New York purchased "Pulp Fiction" for $15.99 Purchase 3: Customer in Los Angeles rented "Se7en" for $9.99 Now let me create a geospatial index to optimize location-based queries: Ran ``create-index`` Excellent! The movie_purchases collection is now ready with: - Geospatial capabilities - 2dsphere index for location-based queries - Timeseries optimization - Index on purchase timestamps - Sample data - 3 purchase records across different US cities This collection can now support queries like: - Finding purchases within a certain radius of a location - Analyzing purchase patterns over time - Correlating viewing behavior with geographic regions - Time-based analytics and trending
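A purchase document with geospatial fields might be shaped like the following sketch, using the field names from the response above (the values are hypothetical). Note that GeoJSON points store coordinates as [longitude, latitude]:

```javascript
// Sketch of one purchase document with GeoJSON points (hypothetical values):
const purchase = {
  order_id: "PUR-001",
  movie_title: "The Shawshank Redemption",
  price: 12.99,
  customer_location: { type: "Point", coordinates: [-122.4194, 37.7749] }, // [lng, lat] San Francisco
  theater_location: { type: "Point", coordinates: [-122.4089, 37.7835] },
  purchase_timestamp: new Date("2024-01-15T19:30:00Z"),
  viewing_duration_minutes: 142,
};

// A 2dsphere index specification enables location-based queries such as $near:
const geoIndexSpec = { customer_location: "2dsphere" };

// In mongosh: db.movie_purchases.createIndex(geoIndexSpec)
```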
Copy an existing collection
Create a backup or a duplicate of an existing collection.
Make a copy of the movies collection named movies_copy
I'll create a copy of the movies collection named movies_copy. I'll use an aggregation pipeline with $out to copy all documents from the movies collection to a new movies_copy collection. Ran ``aggregate`` Perfect! I've successfully created a copy of the movies collection named movies_copy. Let me verify the copy was created correctly by checking the document count: Ran ``count`` Found 21349 documents in the collection "movies_copy" Excellent! The movies_copy collection has been successfully created with all 21,349 documents from the original movies collection. The copy operation completed successfully and you now have an identical backup of your movies collection.
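The copy described above comes down to a single-stage aggregation pipeline; a sketch:

```javascript
// $out writes the pipeline output to a new collection, effectively copying it.
// Note: $out replaces the target collection if it already exists.
const copyPipeline = [{ $out: "movies_copy" }];

// In mongosh: db.movies.aggregate(copyPipeline)
```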
Data export and sharing
The following example demonstrates how to export query results for sharing or further processing in external tools.
Generate comprehensive reports and export them for use in other applications or for sharing with team members.
Exported results are saved to a file on the machine that runs the MCP Server. You can also access the exported data through the exported-data resource used by the AI client application.
The export file is stored at the file system path specified by the exportPath configuration option. The export file may be deleted after the period specified by the exportTimeout configuration option. For more information, see Export Data from the MongoDB MCP Server.
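The relevant server settings might look like the following sketch; the option names come from the text above, but the values (and the timeout units) are illustrative only:

```json
{
  "exportPath": "/Users/<you>/.mongodb/mongodb-mcp/exports",
  "exportTimeout": 300000
}
```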
Summarize and export the contents of the movies collection.
I'll help you summarize and export the contents of the movies collection from your connected MongoDB Atlas cluster. Let's do this in two steps: 1. First, I'll create a summary of the movies collection using an aggregation pipeline. 2. Then I'll export the results using the export tool. Let's start by getting a summary of the movies collection: Ran ``aggregate`` Now that we have a summary of the movies collection, let's create a more focused dataset to export. Let's get the top 10 highest-rated movies: Ran ``aggregate`` Now, let's export this data using the MCP Server export tool: Ran ``export`` I've successfully summarized and exported the contents of the movies collection from your MongoDB Atlas cluster. Here's a breakdown of what I did: **Movies Collection Summary** **Collection Overview:** The collection contains 21,349 movie documents Time span: 1896 to 2016 (120 years of cinema history) Average IMDb rating: 6.66 out of 10 Average runtime: ~104 minutes **Top Genres:** Drama (12,385 movies) Comedy (6,532 movies) Romance (3,318 movies) Crime (2,457 movies) Thriller (2,454 movies) **Top Directors:** Woody Allen (40 movies) Martin Scorsese (32 movies) Takashi Miike (31 movies) Steven Spielberg (29 movies) John Ford (29 movies) **Top Languages:** English (14,639 movies) French (2,405 movies) Spanish (1,600 movies) German (1,377 movies) Italian (1,228 movies) **Highest Rated Movies:** Band of Brothers (2001) - 9.6/10 Planet Earth (2006) - 9.5/10 The Civil War (1990) - 9.4/10 A Brave Heart: The Lizzie Velasquez Story (2015) - 9.4/10 Cosmos (1980) - 9.3/10 **Export Details** The data has been exported to a JSON file in relaxed format. 
You can access the exported file at: /Users/XXX/.mongodb/mongodb-mcp/exports/68b8c10a6d0291128800be0b/sample_mflix.movies.68b8c2496d0291128800be0d.json The export includes: - Genre summary with counts and average ratings - Runtime statistics (min, max, average) - Year-by-decade summary with counts and ratings - Top directors with film counts - Language distribution - Top 10 highest-rated movies This export provides a comprehensive overview of the movies collection that can be easily shared or imported into other tools for further analysis or visualization.
Local Atlas deployments
You can use the MCP Server with local Atlas deployments. To use the MCP Server tools with local Atlas deployments, you must install Docker. For an introduction to local Atlas deployments, see Create a Local Atlas Deployment.
The following examples show how to use the MongoDB MCP Server to create and interact with local Atlas deployments.
List local Atlas deployments
The following example lists the local Atlas deployments.
List all local Atlas deployments.
I'll list all the local Atlas deployments for you. Ran ``atlas-local-list-deployments`` Here are all your local Atlas deployments: Found 1 deployment:

| Deployment Name | State | MongoDB Version |
|---|---|---|
| local7356 | Running | 8.2.1 |

You currently have one Atlas Local deployment named local7356 that is running MongoDB version 8.2.1.
Connect to a local Atlas deployment and list databases
The following example connects to a local Atlas deployment named local7356 and lists its databases.
Connect to my local Atlas deployment local7356 and list the databases.
I'll connect to your local Atlas deployment local7356 and then list the databases. Ran ``atlas-local-connect-deployment`` Now I'll list the databases in this deployment: Ran ``list-databases`` Successfully connected to Atlas Local deployment local7356! Here are the databases in this deployment: Found 4 databases: admin - 262,144 bytes (256 KB) config - 237,568 bytes (~232 KB) local - 1,163,264 bytes (~1.1 MB) testdb - 438,272 bytes (~428 KB) Your local Atlas deployment contains: System databases: admin, config, and local (standard MongoDB system databases) User database: testdb (your custom database) The local database has grown to about 1.1 MB, which is normal for local operations and metadata storage. Would you like me to explore any of these databases further, such as showing the collections in testdb?
Create a new local Atlas deployment with a database and collection
The following example creates a new local Atlas deployment with a database and a sample collection. To run the example, you must disable read-only mode.
Create a new local Atlas deployment, connect to it, create a collection with sample pizza orders data, and then retrieve the sample data.
I'll help you create a new local Atlas deployment, connect to it, create a collection with sample pizza orders data, and then retrieve it. Let me break this down into steps: Created 4 todos Now let me start with the first task: Starting (1/4) Create new local Atlas deployment Ran ``atlas-local-create-deployment`` Great! Created deployment local5528. Now let me mark that as completed and connect to it: Starting (2/4) Connect to new deployment Ran ``atlas-local-connect-deployment`` Perfect! Now let me create the pizza orders collection with sample data: Starting (3/4) Create pizza orders collection Ran ``create-collection`` Now let me insert sample pizza orders data: Great! Now let me retrieve and display the sample data: Starting (4/4) Retrieve and display sample data Ran ``find`` Completed (4/4) Retrieve and display sample data Perfect! I've successfully completed all the tasks. Here's a summary of what was accomplished: Task Completed Successfully 1. Created New Local Atlas Deployment Deployment Name: local5528 Container ID: XXX Status: Running 2. Connected to the New Deployment Successfully connected to deployment local5528 3. Created Pizza Orders Collection Database: pizzarestaurant Collection: orders Documents Inserted: 5 pizza orders 4. 
Sample Pizza Orders Data Created The collection now contains 5 realistic pizza orders with the following details: Order ORD-001 - John Smith (New York) Margherita Pizza (Large) + Pepperoni Pizza (Medium) Total: $34.98 | Status: Completed | Delivery Order ORD-002 - Sarah Johnson (Brooklyn) 2x Hawaiian Pizza (Large) + Caesar Salad Total: $50.97 | Status: Preparing | Pickup Order ORD-003 - Mike Chen (Manhattan) Meat Lovers Pizza (Extra Large) + 2x Garlic Bread + 3x Soda Total: $40.95 | Status: Delivered | Delivery Order ORD-004 - Emily Davis (Queens) Veggie Supreme Pizza (Medium) Total: $17.99 | Status: Ready | Pickup Order ORD-005 - David Wilson (Staten Island) BBQ Chicken Pizza (Large) + Buffalo Wings Total: $35.98 | Status: Out for Delivery | Delivery Data Structure Features: Customer Information: Name, phone, email, complete address Order Details: Order ID, date, delivery type, payment method, status Items: Pizza types, sizes, quantities, prices, detailed toppings Financial Data: Individual item prices and total amounts Your new local Atlas deployment is now ready with a fully functional pizza restaurant orders system!
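One of the generated orders might be shaped like the following sketch. The item prices are hypothetical (chosen so the line items add up to the $34.98 total shown above), and the total is computed in integer cents to avoid floating-point drift:

```javascript
// Hypothetical order document mirroring ORD-001 from the response above.
const order = {
  order_id: "ORD-001",
  customer: { name: "John Smith", city: "New York" },
  items: [
    { name: "Margherita Pizza", size: "Large", price_cents: 1899 },
    { name: "Pepperoni Pizza", size: "Medium", price_cents: 1599 },
  ],
  delivery_type: "Delivery",
  status: "Completed",
};

// Total in cents: 1899 + 1599 = 3498, i.e. $34.98.
order.total_cents = order.items.reduce((sum, item) => sum + item.price_cents, 0);

// In mongosh: db.getSiblingDB("pizzarestaurant").orders.insertOne(order)
```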
Delete a local Atlas deployment
The following example deletes a local Atlas deployment named local5528. To run the example, you must disable read-only mode.
Delete my local Atlas deployment local5528.
I'll delete your local Atlas deployment local5528 for you. Ran ``atlas-local-delete-deployment``
After the atlas-local-delete-deployment tool deletes the local Atlas deployment, the AI client might respond that the deployment was already deleted. This is because the atlas-local-delete-deployment tool ran successfully and the local Atlas deployment no longer exists.
Performance optimization
The following examples demonstrate how you can use the MongoDB MCP Server to identify and resolve performance issues. The actual output for your prompts depends on whether your cluster is experiencing performance issues, so your results might not match the examples exactly.
These examples use the atlas-get-performance-advisor tool to retrieve recommendations from the Performance Advisor.
Analyze slow queries
Identify and analyze slow-performing queries to understand performance bottlenecks.
Note
When analyzing slow queries, the MongoDB MCP Server retrieves a sample of slow queries, limited to 50 queries. This sample includes up to 50 recent slow queries that match the conditions specified in the prompt, to ensure optimal performance and response times.
Query for all slow queries in the cluster:
Show me slow queries in my cluster
Ran ``atlas-get-performance-advisor`` I found extensive slow query logs for ClusterM10. Here's a summary of the slow queries detected: **Query Types and Patterns** **Most Common Slow Queries:** **Collection Statistics Queries ($collStats)** - **Collections**: SchemaAdvisor.Call to Action: Remove unnecessary indexes and SchemaAdvisor.Call to Action: Remove unnecessary indexes 2 - **Duration**: 30-100+ milliseconds - **Pattern**: These are MongoDB Automation Agent queries collecting storage statistics - **Frequency**: Very frequent (every 10 minutes) **Collection Statistics for mcp_db.new** - **Duration**: 30-60+ milliseconds - **Pattern**: Similar $collStats operations **Find Query on sample_db.test** - **Query**: {"find": "dbC", "filter": {"z": 10}} - **Duration**: 5 milliseconds - **Plan**: COLLSCAN (Collection Scan) - **Performance Issue**: Examined 10,001 documents but only returned 1 - This matches the index suggestion we saw earlier! **Key Performance Issues** **Inefficient Query on sample_db.test:** - The find query {"z": 10} is doing a full collection scan - This directly relates to the index suggestion we found: creating an index on field z would significantly improve this query - Currently examining 10,001 documents to find 1 result **Automation Agent Overhead:** - Many slow queries are from MongoDB Automation Agent collecting statistics - These are system-level operations that run regularly - Duration typically 30-100ms, which is acceptable for system operations **Recommendations** **Implement the Suggested Index:** - Create an index on {"z": 1} for the sample_db.test collection - This will eliminate the collection scan and dramatically improve the find query performance **Monitor Automation Agent Performance:** - The frequent $collStats operations are normal but worth monitoring - Consider if the collection statistics collection frequency can be optimized The most actionable finding is the inefficient query on sample_db.test that would benefit from the 
index suggestion we identified earlier.
Query for general performance issues:
How is my cluster performance?
Ran ``atlas-get-performance-advisor`` Based on the performance advisor data I retrieved for ClusterM10, here's an assessment of your cluster performance: **Overall Performance Assessment: Good with Room for Optimization** **✅ Positive Indicators** - **Query Response Times**: Most queries are completing in reasonable timeframes (30-100ms) - **No Critical Issues**: No extremely slow queries (>1000ms) detected - **Stable Operations**: Consistent performance patterns over time - **System Health**: MongoDB Automation Agent operations are functioning normally **⚠️ Areas for Improvement** **Inefficient Query Pattern:** - **Issue**: Find query {"z": 10} on sample_db.test collection - **Current**: Full collection scan examining 10,001 documents - **Impact**: Unnecessary resource consumption - **Solution**: Implement suggested index on field z **Resource Utilization:** - **CPU Usage**: Queries consuming 5-45+ CPU nanoseconds - **Memory**: Response sizes up to 602KB for statistics queries - **I/O**: Some queries showing storage read operations **📊 Performance Metrics Summary** - **Slow Query Count**: 50+ entries in recent logs - **Average Query Duration**: 30-100ms (acceptable range) - **Most Common Operations**: Collection statistics gathering - **Index Optimization Opportunity**: 1 significant improvement identified **🎯 Immediate Recommendations** **High Priority**: Create index {"z": 1} on sample_db.test collection - **Expected Impact**: Eliminate collection scan, reduce query time from 5ms to <1ms - **Resource Savings**: Reduce CPU and I/O usage significantly **Medium Priority**: Monitor automation agent performance - Current 30-100ms for $collStats operations is acceptable - Consider optimizing if frequency becomes problematic **📈 Performance Grade: B+** Your cluster is performing well overall, but implementing the suggested index would elevate it to an A-grade performance level. 
The main bottleneck is the inefficient query pattern that can be easily resolved with proper indexing. Would you like me to help you implement the suggested index or dive deeper into any specific performance metrics?
Query for a specific operation type, duration, or namespace:
Show me slow writes for the past 24 hours in the movies collection
Ran ``atlas-get-performance-advisor`` Based on the slow query logs from the sample_db.test collection, I can see that there are no slow write operations in this collection over the past 24 hours. **Analysis Summary** **Slow Query Logs Found in sample_db.test:** - **1 slow query detected** - this is a READ operation (find query) - **No write operations found** in the slow query logs - Query occurred on 2025-09-30 at 14:52:56 UTC **Detailed Breakdown of the Slow Query** **Query Details:**

.. code-block:: javascript

   db.dbC.find({ "z": 10 })

**Performance Characteristics:** - **Operation Type**: find (READ operation) - **Collection Scan**: ``"planSummary": "COLLSCAN"`` - **Documents Examined**: 10,001 documents - **Keys Examined**: 0 (no index usage) - **Duration**: 6 milliseconds - **Results Returned**: 1 document - **Planning Time**: 1.78ms - **Mirrored Query**: Yes (read preference: secondaryPreferred) **Performance Issues:** - **Missing Index**: The query on field ``z`` is performing a collection scan - **Inefficient**: Examining 10,001 documents to find 1 result - **Read Preference**: Using secondaryPreferred (mirrored query) **Key Findings** ✅ **Write Performance**: No slow write operations detected ⚠️ **Read Performance**: One slow read operation due to missing index
Query for a specific execution time:
Show me queries longer than 5 seconds
Ran ``atlas-get-performance-advisor`` Based on the slow query logs from your MongoDB Atlas cluster, I can see that there are no queries longer than 5 seconds in your cluster performance data. **Analysis Summary** **Query Duration Analysis:** - All slow queries are under 5 seconds - the longest queries are around 117ms (0.117 seconds) - Most queries are between 30-100ms duration - No queries exceed 5 seconds (5000ms) **Detailed Breakdown of Slow Queries** The slow query logs show primarily MongoDB Automation Agent queries with the following characteristics: **Query Types Found:** **SchemaAdvisor Collections** - MongoDB's automated performance monitoring queries - **Duration**: 30-117ms - **Operations**: ``$collStats`` aggregation queries - **Purpose**: Automated performance analysis **smol.woofer Collection** - User application queries - **Duration**: 33ms - **Operations**: Aggregation with ``$match`` and ``$lookup`` - **Issue**: Collection scan (no index usage) **mcp_db.new Collection** - Storage statistics queries - **Duration**: 33-69ms - **Operations**: ``$collStats`` aggregation **sample_db.test Collection** - Find queries - **Duration**: 6ms - **Operations**: find with collection scan **Performance Status**: ✅ Good Your cluster is performing well with: - No queries longer than 5 seconds - All slow queries are under 200ms - Most queries are under 100ms The slow queries are primarily from MongoDB's automated monitoring systems, which is normal and expected behavior.
Get index recommendations
Use the Performance Advisor to suggest indexes to create or drop to improve performance.
What indexes should I create or drop to improve performance?
Ran ``atlas-get-performance-advisor`` Based on the performance advisor analysis of your MongoDB Atlas cluster "ClusterM10", here are the specific index recommendations: **Index to CREATE:** **High Priority - Create Index on sample_db.test collection:** - **Index**: { "z": 1 } - **Weight**: 1,020,000 bytes (≈ 1.02 MB saved per query) - **Impact**: This index will significantly improve performance for queries filtering on the z field - **Reason**: The slow query logs show a query {"z": 10} that performed a collection scan (COLLSCAN) and examined 10,001 documents to return just 1 result **Indexes to DROP:** **No Drop Recommendations Found** The performance advisor found no indexes that should be dropped at this time. **Additional Performance Insights:** **Schema Recommendations:** - **Remove Unnecessary Indexes**: You have collections with more than 30 indexes, which can impact write performance - **Reduce Document Size**: Some documents exceed 2MB, which can cause performance issues - **Avoid Unbounded Arrays**: Some collections have arrays with over 10,000 entries **Slow Query Analysis:** The slow query logs show that most slow queries are from MongoDB's internal automation agent running $collStats operations, which is normal. However, there's one user query that would benefit significantly from the suggested index: - **Query**: db.dbC.find({"z": 10}) - **Current Performance**: Collection scan examining 10,001 documents - **Expected Improvement**: With the suggested index, this should become an index scan examining only the matching documents **Recommended Action:** Create the index { "z": 1 } on the sample_db.test collection. This single index creation could save approximately 1MB of disk reads per query execution for queries filtering on the z field.
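Acting on the recommendation comes down to one createIndex call; a sketch, along with a quick look at why the collection scan was wasteful:

```javascript
// The suggested index from the Performance Advisor output above:
const suggestedIndex = { z: 1 };

// In mongosh: db.getSiblingDB("sample_db").test.createIndex(suggestedIndex)

// Why the COLLSCAN is wasteful: the query examined 10,001 documents to
// return 1, a selectivity ratio of 10,001 examined documents per result.
const docsExamined = 10001;
const docsReturned = 1;
const examinedPerReturned = docsExamined / docsReturned;
```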
Optimize your schema
Get recommendations to improve your database schema design and structure.
Suggest schema improvements for my database
Ran ``atlas-get-performance-advisor``

Based on the performance advisor analysis of your MongoDB Atlas cluster, here are the specific schema improvements recommended:

**Critical Schema Issues Found:**

**1. Remove Unnecessary Indexes**

**Collections Affected:**
- SchemaAdvisor.Call to Action: Remove unnecessary indexes
- SchemaAdvisor.Call to Action: Remove unnecessary indexes 2

**Issue**: Both collections have more than 30 indexes each

**Impact:**
- Slower write operations
- Increased storage overhead
- Longer index maintenance time
- Reduced memory efficiency

**Recommendation:**
- Audit all indexes in these collections
- Remove indexes that are not actively used by your queries
- Keep only indexes that support your application's query patterns
- Consider compound indexes instead of multiple single-field indexes

**2. Reduce Document Size**

**Collections Affected:**
- SchemaAdvisor.Call to Action: Reduce the size of documents
- SchemaAdvisor.Call to Action: Reduce the size of documents 2

**Issue**: Documents larger than 2MB found

**Impact:**
- Risk of approaching MongoDB's 16MB document size limit
- Performance degradation for large documents
- Increased memory usage
- Slower network transfer

**Recommendations:**
- Split large documents into smaller, related documents
- Use GridFS for storing large binary data
- Normalize data by moving large arrays or objects to separate collections
- Archive old data to reduce document size
- Compress data where appropriate

**3. Avoid Unbounded Arrays**

**Collections Affected:**
- SchemaAdvisor.Call to Action: Avoid using unbounded arrays in documents

**Issue**: Arrays with over 10,000 entries detected

**Impact:**
- Poor query performance on array elements
- Memory consumption issues
- Index inefficiency
- Difficult to maintain and update

**Recommendations:**
- Move large arrays to separate collections with references
- Implement pagination for array access
- Use embedded documents with limits (e.g., max 100 items)
- Consider denormalization for frequently accessed array elements
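A common fix for the unbounded-array warning is to move the array into its own collection and reference the parent document by ID. A minimal before/after sketch, with hypothetical field names (the shapes are illustrative, not taken from the advisor output):

```python
# BEFORE: an unbounded array embedded in the parent document.
# (Hypothetical document shape for illustration.)
order_before = {
    "_id": "order1",
    "customer": "c42",
    "events": [{"seq": i, "status": "update"} for i in range(10_000)],  # grows forever
}

# AFTER: each event becomes its own document in a separate collection,
# referencing the parent by ID. The parent document stays small and bounded.
order_after = {"_id": "order1", "customer": "c42"}
order_events = [
    {"order_id": "order1", "seq": i, "status": "update"} for i in range(10_000)
]

# The parent document no longer grows with the number of events.
print(len(order_after))  # 2 fields regardless of how many events exist
```

Events can then be queried with pagination (e.g., filtering on `order_id` with `limit` and `skip`), instead of loading one ever-growing array.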
Vector Search
Important
Vector search support in MCP is available as a preview feature. To enable it, set the ``previewFeatures`` flag or the ``MDB_MCP_PREVIEW_FEATURES`` environment variable to ``search`` in your MCP configuration.
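For example, in a typical MCP client configuration file the flag can be passed through the environment. The surrounding structure below is an illustrative sketch of a common MCP client config; only the ``MDB_MCP_PREVIEW_FEATURES`` variable and its ``search`` value come from the note above:

```json
{
  "mcpServers": {
    "MongoDB": {
      "command": "npx",
      "args": ["-y", "mongodb-mcp-server"],
      "env": {
        "MDB_MCP_PREVIEW_FEATURES": "search"
      }
    }
  }
}
```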
The following examples demonstrate how to use the MongoDB MCP Server to work with MongoDB Vector Search. To learn more about how the MCP Server works with Vector Search, see Vector Search Support.
Note
To use automatic embedding generation, you must configure the MCP Server with a Voyage AI API key.
Create and manage vector search indexes
Create, drop, and list vector search indexes for your collections.
Create a vector search index on the sample_db.products collection for the embeddings field with 1024 dimensions using dot product similarity
I'll help you create a vector search index on the MongoDB collection. Let me first check the available MongoDB tools and then create the index. Now I'll create the vector search index:

Ran ``create-index``

Vector search index created successfully! The index embeddings_vector_index has been created on the sample_db.products collection for the embeddings field with:
- 1024 dimensions
- Dot product similarity

Note that vector search indexes can take some time to build. You can check the index status using the collection-indexes tool.
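Behind this tool call, the index is described by a standard Atlas Vector Search index definition. A sketch based on the values in this example (the index name is the one reported above):

```json
{
  "name": "embeddings_vector_index",
  "type": "vectorSearch",
  "definition": {
    "fields": [
      {
        "type": "vector",
        "path": "embeddings",
        "numDimensions": 1024,
        "similarity": "dotProduct"
      }
    ]
  }
}
```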
Show me all vector search indexes on the sample_db.products collection
Ran ``collection-indexes``

The sample_db.products collection has 1 vector search index:

embeddings_vector_index
- Type: vectorSearch
- Status: READY ✓
- Queryable: Yes
- Field: embeddings
- Dimensions: 1024
- Similarity: dotProduct
- Quantization: none

The index is ready and can be used for vector search queries.
Drop the vector search index from the sample_db.products collection
First, let me check what indexes exist on this collection to identify the vector search index. I can see there's a vector search index named embeddings_vector_index on the collection. Now I need to activate the index management tools to drop it.

Ran ``drop-index``

Done! The vector search index embeddings_vector_index has been successfully dropped from the sample_db.products collection.
Insert documents with automatic embeddings
When you configure a Voyage AI API key, the MCP Server automatically generates embeddings when inserting documents. You can provide plain text values, and the server generates the embeddings for fields that have vector search indexes.
Insert documents into the sample_db database and products collection. Use the voyage-3-large embedding model to generate vector embeddings from descriptions for the `embeddings` field. 1. name: "Headphones", description: "Premium wireless noise-canceling headphones with 30-hour battery" 2. name: "Earbuds", description: "True wireless earbuds with active noise cancellation" 3. name: "Monitor", description: "24-inch 4K monitor with 1ms response time" 4. name: "Keyboard", description: "Mechanical keyboard with customizable RGB lighting" 5. name: "Mouse", description: "Wireless ergonomic mouse with programmable buttons"
I'll insert these documents with embeddings for their descriptions. Let me activate the necessary tools and generate embeddings.

Ran ``insert-many``

Successfully inserted 5 documents into sample_db.products:
- Headphones - Premium wireless noise-canceling headphones with 30-hour battery
- Earbuds - True wireless earbuds with active noise cancellation
- Monitor - 24-inch 4K monitor with 1ms response time
- Keyboard - Mechanical keyboard with customizable RGB lighting
- Mouse - Wireless ergonomic mouse with programmable buttons

The documents have been inserted with their descriptions embedded in the embeddings field. When you perform vector searches using the $vectorSearch aggregation stage, the MCP server will automatically generate the proper vector embeddings from the text for comparison.
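The payload behind this insert is just the plain documents; with a Voyage AI key configured, the server fills in the ``embeddings`` field for you. A sketch of the documents as passed to the insert:

```python
# Documents as passed to insert-many. The MCP server generates the
# `embeddings` vectors from each description (requires a Voyage AI API key);
# no vector data needs to be supplied by the caller.
docs = [
    {"name": "Headphones", "description": "Premium wireless noise-canceling headphones with 30-hour battery"},
    {"name": "Earbuds", "description": "True wireless earbuds with active noise cancellation"},
    {"name": "Monitor", "description": "24-inch 4K monitor with 1ms response time"},
    {"name": "Keyboard", "description": "Mechanical keyboard with customizable RGB lighting"},
    {"name": "Mouse", "description": "Wireless ergonomic mouse with programmable buttons"},
]

print(len(docs))  # 5
```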
Run a vector search query
When you configure a Voyage AI API key, the MCP Server automatically generates embeddings when running vector search queries. You can provide the raw query text, and the server automatically generates the embeddings for it.
Run a vector search query on sample_db.products on the 'embeddings' field using the vector search index to find products that I can use to listen to music.
I'll search for products related to listening to music using vector search.

Ran ``aggregate``

Products for Listening to Music:
1. Earbuds (Score: 0.77) - True wireless earbuds with active noise cancellation
2. Headphones (Score: 0.77) - Premium wireless noise-canceling headphones with 30-hour battery

These are the two most relevant products for listening to music in your database. Both scored highly on the semantic search and are specifically designed for audio listening, with the earbuds and headphones being the clear top matches.
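The aggregation the server runs is built around a ``$vectorSearch`` stage. A minimal sketch of such a pipeline — the ``numCandidates`` and ``limit`` values are illustrative, and the query vector shown is a placeholder for the embedding the MCP server generates from the query text:

```python
# Sketch of a $vectorSearch aggregation pipeline. The queryVector here is a
# placeholder; in practice the MCP server generates a 1024-dimension embedding
# from the natural-language query via Voyage AI.
query_vector = [0.0] * 1024  # placeholder embedding

pipeline = [
    {
        "$vectorSearch": {
            "index": "embeddings_vector_index",
            "path": "embeddings",
            "queryVector": query_vector,
            "numCandidates": 50,   # illustrative value
            "limit": 5,            # illustrative value
        }
    },
    # Project the fields of interest plus the similarity score.
    {"$project": {"name": 1, "description": 1, "score": {"$meta": "vectorSearchScore"}}},
]

print(pipeline[0]["$vectorSearch"]["path"])  # embeddings
```

The ``score`` field produced by ``$meta: "vectorSearchScore"`` is the relevance value (e.g., 0.77) shown in the response above.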
Next steps
To learn more about configuring and using the MongoDB MCP Server:
Learn about all available configuration options and connection methods.
Explore the complete list of tools and their capabilities.
Understand how to securely deploy and use the MCP Server.