This page demonstrates how to use the MongoDB MCP Server through practical examples with natural language prompts. Each example shows a prompt that you can enter in your AI client and a sample response that you might receive from the large language model (LLM).
You can use the examples on this page as a starting point for interacting with the MongoDB MCP Server.
Note
The exact output that you receive varies based on your data, your AI client, and the LLM that you use. In these examples, private information such as organization IDs, project IDs, and passwords is redacted.
Prerequisites
Before you run these examples, ensure that you have the following:
Access to a MongoDB deployment that contains data to explore.
Atlas Administration
The following examples demonstrate how to use the MongoDB MCP Server to manage your Atlas infrastructure.
View Organizations and Projects
Get an overview of your Atlas account structure and available resources.
Show my Atlas organizations and projects
Ran ``atlas-list-orgs`` Ran ``atlas-list-projects`` Here are your MongoDB Atlas organizations and projects: **Organizations:** TestOrganization Organization ID: XXX **Projects:** TestProject Project ID: XXX Organization: TestOrganization Created: August 7, 2020 at 11:48:55 AM You have one organization containing one project. This project contains your TestCluster that we're currently connected to.
Create an Atlas Cluster
Set up a new Atlas project with a cluster and all the necessary access configuration in a single workflow.
Create a new Atlas project named myNewProject and create a free cluster named myNewCluster in the new project, add access list for my current IP, create a database user named myNewUser with read and write access to the new cluster, and return the connection string for the new cluster
I'll create a new Atlas project, free cluster, and set up all the necessary access details for you. Ran ``atlas-create-project`` Great! I've created a new project named "myNewProject" in your MongoDB Atlas organization. Now let's list the projects to get the project ID for the new project: Ran ``atlas-list-projects`` Now I'll create a free cluster named "myNewCluster" in the new project: Ran ``atlas-create-free-cluster`` Now, I'll add your current IP address to the access list: Ran ``atlas-create-access-list`` Now let's create a database user named "myNewUser" with read and write access: Ran ``atlas-create-db-user`` Perfect! Everything has been set up successfully: **New Project Information:** Project Name: myNewProject Project ID: XXX **New Cluster Information:** Cluster Name: myNewCluster Region: US East (US_EAST_1) **Database User:** Username: myNewUser Password: XXX Access: Read and write to any database **Connection String:** mongodb+srv://myNewUser:XXX@myNewCluster.mongodb.net/ Your current IP address has been added to the access list, so you should be able to connect to the cluster from your current location. The cluster will take a few minutes to fully provision and be ready for connections.
Data Exploration
The following examples demonstrate how to use the MongoDB MCP Server to explore and understand your MongoDB data.
List Available Databases
Discover which databases are available in your cluster and get insight into their contents.
Show my Atlas databases
Ran ``list-databases`` Here are the databases in your Atlas cluster "TestCluster": **Sample Databases:** sample_mflix - 138.4 MB (Movie database with films, theaters, users, comments) sample_training - 53.6 MB (Training data for learning MongoDB) sample_airbnb - 55.3 MB (Airbnb listings and reviews data) sample_analytics - 10.0 MB (Analytics and customer data) sample_restaurants - 6.8 MB (Restaurant and review data) sample_weatherdata - 2.7 MB (Weather observation data) sample_geospatial - 1.3 MB (Geospatial data examples) sample_supplies - 1.1 MB (Supply chain data) sample_guides - 41 KB (Getting started guides) **System Databases:** local - 4.1 GB (MongoDB internal database) admin - 365 KB (Administrative database) Your cluster contains MongoDB's sample datasets, which are great for learning and testing. The largest databases are sample_mflix (movies), sample_training, and sample_airbnb. Would you like me to explore any specific database or show you the collections within one of them?
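For reference, the ``list-databases`` tool returns roughly the same information that the ``listDatabases`` command reports in ``mongosh``. A minimal sketch:

.. code-block:: javascript

   // Roughly equivalent mongosh call: list databases and their on-disk sizes
   db.adminCommand({ listDatabases: 1 }).databases.forEach(d => {
     print(`${d.name} - ${d.sizeOnDisk} bytes`);
   });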
Explore Collections
Dig into a specific database to understand its structure and collections.
Show the collections in the Atlas sample_mflix database
Ran ``list-collections`` Here are the collections in the sample_mflix database: **Core Collections:** movies - Main collection containing movie data (titles, genres, cast, ratings, etc.) users - User profiles and account information comments - User comments and reviews on movies theaters - Movie theater locations and details sessions - User session data **Additional Collections:** embedded_movies - Movies collection with embedded/denormalized data structure The movies collection contains detailed information about films including plot, cast, directors, ratings, release dates, and genres. The comments collection links to movies and users to store user reviews and discussions. Would you like me to explore any specific collection in more detail, such as showing sample documents or collection statistics?
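If you prefer to double-check the structure directly in ``mongosh``, a minimal sketch that lists the collections in ``sample_mflix`` and samples one movie document (the projected fields are illustrative):

.. code-block:: javascript

   // List the collections in sample_mflix and peek at one movie document
   const mflix = db.getSiblingDB("sample_mflix");
   mflix.getCollectionNames().forEach(name => print(name));
   printjson(mflix.movies.findOne({}, { title: 1, genres: 1, year: 1, "imdb.rating": 1 }));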
Data Analysis
Analyze your MongoDB data to identify patterns and trends.
Summarize the Atlas movies collection by thrillers
I'll help you analyze the Atlas movies collection focusing on thrillers. First, let me check what databases are available in your connected cluster. Ran ``list-databases`` Perfect! I can see you have the sample_mflix database which contains the movies collection. Let me check the collections in that database: Ran ``list-collections`` Great! Now let me analyze the movies collection to find thrillers and provide a summary. First, let me check the schema to understand how genres are stored: Ran ``collection-schema`` Perfect! I can see that genres are stored as an array of strings. Now let me get a count of thriller movies and then perform an aggregation to summarize them: Ran ``count`` Now let me create a comprehensive summary of thriller movies using aggregation: Ran ``aggregate`` I see there are some data quality issues. Let me get better data by filtering for movies with numeric ratings: Ran ``aggregate`` Now let me get some more summary statistics by decade: Ran ``aggregate`` Here's a comprehensive analysis of thriller movies in your Atlas sample_mflix collection: **Overall Statistics** Total Thriller Movies: 2,454 films Time Span: 1924 to 2011 Average Rating: 6.3/10 (IMDb) Average Runtime: 105 minutes Total IMDb Votes: 85.8 million across all thrillers **Top-Rated Thriller Movies** Drishyam (2013) - 8.9/10 - Indian thriller directed by Jeethu Joseph Tinker Tailor Soldier Spy (1979) - 8.7/10 - British spy thriller Rear Window (1954) - 8.6/10 - Alfred Hitchcock classic The Silence of the Lambs (1991) - 8.6/10 - Psychological thriller Se7en (1995) - 8.6/10 - David Fincher crime thriller The Usual Suspects (1995) - 8.6/10 - Neo-noir mystery Léon: The Professional (1994) - 8.6/10 - Action thriller Psycho (1960) - 8.6/10 - Another Hitchcock masterpiece Memento (2000) - 8.5/10 - Christopher Nolan psychological thriller **Trends by Decade** 1920s-1950s: Few thrillers but high quality (avg 7.2-7.8 rating) 1960s-1970s: Growing popularity, still maintaining quality 1980s: Significant growth (200 films), quality decline begins 1990s: Peak production era (417 films), avg 6.3 rating 2000s: Highest volume (854 films), avg 6.24 rating 2010s: 730 films, lowest average rating (6.12) The data shows that while thriller production increased dramatically from the 1980s onward, the average quality (as measured by IMDb ratings) has generally declined, suggesting the genre became more commercialized with higher volume but potentially lower artistic standards in recent decades.
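The decade-level summary above comes from aggregation pipelines run by the ``aggregate`` tool. A minimal ``mongosh`` sketch of a comparable pipeline, assuming the standard ``sample_mflix.movies`` fields (a ``genres`` array, a numeric ``year``, and ``imdb.rating``):

.. code-block:: javascript

   // Count thriller movies per decade and compute the average IMDb rating and runtime
   db.getSiblingDB("sample_mflix").movies.aggregate([
     { $match: {
         genres: "Thriller",
         year: { $type: "number" },            // skip documents with malformed year values
         "imdb.rating": { $type: "number" }    // skip documents with missing or non-numeric ratings
     } },
     { $group: {
         _id: { $subtract: [ "$year", { $mod: [ "$year", 10 ] } ] },   // decade, for example 1990
         count: { $sum: 1 },
         avgRating: { $avg: "$imdb.rating" },
         avgRuntime: { $avg: "$runtime" }
     } },
     { $sort: { _id: 1 } }
   ]);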
Database Operations
The following examples demonstrate how to use the MongoDB MCP Server to perform common database operations.
Insert New Documents
Add new data to a collection.
Add a document to the movies_copy collection
I'll add a new document to the movies_copy collection. Let me create a sample movie document with the same structure as the existing movies: Ran ``insert-many`` Perfect! I've successfully added a new movie document to the movies_copy collection. Here's what I added: **New Movie Added:** Title: "The AI Thriller" Genres: Thriller, Sci-Fi, Drama Year: 2024 Runtime: 118 minutes IMDb Rating: 7.8/10 (125,000 votes) Cast: John Smith, Sarah Johnson, Michael Chen Director: Alex Rodriguez Rating: PG-13 The document was inserted with ID: 684b15fb5084ddef1f265e8f Let me verify the collection now has one additional document: Ran ``count`` Confirmed! The movies_copy collection now contains 21,350 documents (up from 21,349), including your new thriller movie about AI consciousness.
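The ``insert-many`` tool wraps MongoDB's standard insert operations. A minimal ``mongosh`` sketch that inserts a comparable document into ``movies_copy`` (the field values are illustrative, not part of the sample dataset):

.. code-block:: javascript

   // Insert one sample movie document into movies_copy
   db.getSiblingDB("sample_mflix").movies_copy.insertOne({
     title: "The AI Thriller",                         // illustrative values only
     genres: [ "Thriller", "Sci-Fi", "Drama" ],
     year: 2024,
     runtime: 118,
     rated: "PG-13",
     cast: [ "John Smith", "Sarah Johnson", "Michael Chen" ],
     directors: [ "Alex Rodriguez" ],
     imdb: { rating: 7.8, votes: 125000 }
   });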
Create a New Collection
Create and customize a new collection.
Create a new collection to store movie purchases data that includes geospatial and timeseries fields
I'll create a new collection for storing movie purchases data with geospatial and timeseries fields. Let me first create the collection and then add some sample documents with the appropriate structure. Ran ``create-collection`` Now I'll insert sample documents that include geospatial data (customer location, theater location) and timeseries data (purchase timestamp, viewing timestamp): Ran ``insert-many`` Perfect! I've created the movie_purchases collection with sample data that includes both geospatial and timeseries fields. Here's what the collection contains: **Collection Structure** **Geospatial Fields:** customer_location: GeoJSON Point with customer's coordinates theater_location: GeoJSON Point with theater coordinates customer_address: Human-readable address information **Timeseries Fields:** purchase_timestamp: When the purchase was made viewing_timestamp: When the movie was actually watched viewing_duration_minutes: How long the customer watched completion_percentage: Percentage of movie completed **Sample Data Added:** Purchase 1: Customer in San Francisco bought "The Shawshank Redemption" rental for $12.99 Purchase 2: Customer in New York purchased "Pulp Fiction" for $15.99 Purchase 3: Customer in Los Angeles rented "Se7en" for $9.99 Now let me create a geospatial index to optimize location-based queries: Ran ``create-index`` Excellent! The movie_purchases collection is now ready with: - Geospatial capabilities - 2dsphere index for location-based queries - Timeseries optimization - Index on purchase timestamps - Sample data - 3 purchase records across different US cities This collection can now support queries like: - Finding purchases within a certain radius of a location - Analyzing purchase patterns over time - Correlating viewing behavior with geographic regions - Time-based analytics and trending
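For reference, a minimal ``mongosh`` sketch of the same pattern: a purchases collection with GeoJSON location fields and timestamp fields, plus a ``2dsphere`` index for location-based queries. The collection and field names follow the example above; the documents and coordinates are illustrative:

.. code-block:: javascript

   const mflix = db.getSiblingDB("sample_mflix");

   // Insert one purchase with GeoJSON points ([longitude, latitude]) and timestamps
   mflix.movie_purchases.insertOne({
     movie_title: "The Shawshank Redemption",
     price: 12.99,
     customer_location: { type: "Point", coordinates: [ -122.4194, 37.7749 ] },   // San Francisco
     theater_location:  { type: "Point", coordinates: [ -122.4313, 37.7699 ] },
     purchase_timestamp: new Date("2024-06-01T19:30:00Z"),
     viewing_timestamp:  new Date("2024-06-01T20:00:00Z")
   });

   // Indexes for location-based and time-based queries
   mflix.movie_purchases.createIndex({ customer_location: "2dsphere" });
   mflix.movie_purchases.createIndex({ purchase_timestamp: 1 });

   // Example: purchases made within 10 km of downtown San Francisco
   mflix.movie_purchases.find({
     customer_location: {
       $near: {
         $geometry: { type: "Point", coordinates: [ -122.4194, 37.7749 ] },
         $maxDistance: 10000
       }
     }
   });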
Copy an Existing Collection
Create a backup or duplicate of an existing collection.
Make a copy of the movies collection named movies_copy
I'll create a copy of the movies collection named movies_copy. I'll use an aggregation pipeline with $out to copy all documents from the movies collection to a new movies_copy collection. Ran ``aggregate`` Perfect! I've successfully created a copy of the movies collection named movies_copy. Let me verify the copy was created correctly by checking the document count: Ran ``count`` Found 21349 documents in the collection "movies_copy" Excellent! The movies_copy collection has been successfully created with all 21,349 documents from the original movies collection. The copy operation completed successfully and you now have an identical backup of your movies collection.
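Because the ``aggregate`` tool runs an aggregation pipeline, the copy is essentially a single ``$out`` stage. A minimal ``mongosh`` sketch:

.. code-block:: javascript

   const mflix = db.getSiblingDB("sample_mflix");

   // Copy every document from movies into movies_copy
   // ($out replaces movies_copy if it already exists)
   mflix.movies.aggregate([ { $out: "movies_copy" } ]);

   // Verify that the copy has the same number of documents as the source
   print(mflix.movies.countDocuments(), mflix.movies_copy.countDocuments());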
Data Export and Sharing
The following example demonstrates how to export query results for sharing or further processing in external tools.
Generate a comprehensive report and export it for use in other applications or to share with team members.
The exported results are saved to a file on the machine where the MCP Server runs. You can also use your AI client application to access the exported data through the exported-data resource.
Export files are stored in the file system path specified by the exportPath configuration option. Export files can be deleted after the time period specified by the exportTimeout configuration option. For more details, see Export Data from the MongoDB MCP Server.
Summarize and export the contents of the movies collection.
I'll help you summarize and export the contents of the movies collection from your connected MongoDB Atlas cluster. Let's do this in two steps: 1. First, I'll create a summary of the movies collection using an aggregation pipeline. 2. Then I'll export the results using the export tool. Let's start by getting a summary of the movies collection: Ran ``aggregate`` Now that we have a summary of the movies collection, let's create a more focused dataset to export. Let's get the top 10 highest-rated movies: Ran ``aggregate`` Now, let's export this data using the MCP Server export tool: Ran ``export`` I've successfully summarized and exported the contents of the movies collection from your MongoDB Atlas cluster. Here's a breakdown of what I did: **Movies Collection Summary** **Collection Overview:** The collection contains 21,349 movie documents Time span: 1896 to 2016 (120 years of cinema history) Average IMDb rating: 6.66 out of 10 Average runtime: ~104 minutes **Top Genres:** Drama (12,385 movies) Comedy (6,532 movies) Romance (3,318 movies) Crime (2,457 movies) Thriller (2,454 movies) **Top Directors:** Woody Allen (40 movies) Martin Scorsese (32 movies) Takashi Miike (31 movies) Steven Spielberg (29 movies) John Ford (29 movies) **Top Languages:** English (14,639 movies) French (2,405 movies) Spanish (1,600 movies) German (1,377 movies) Italian (1,228 movies) **Highest Rated Movies:** Band of Brothers (2001) - 9.6/10 Planet Earth (2006) - 9.5/10 The Civil War (1990) - 9.4/10 A Brave Heart: The Lizzie Velasquez Story (2015) - 9.4/10 Cosmos (1980) - 9.3/10 **Export Details** The data has been exported to a JSON file in relaxed format. You can access the exported file at: /Users/XXX/.mongodb/mongodb-mcp/exports/68b8c10a6d0291128800be0b/sample_mflix.movies.68b8c2496d0291128800be0d.json The export includes: - Genre summary with counts and average ratings - Runtime statistics (min, max, average) - Year-by-decade summary with counts and ratings - Top directors with film counts - Language distribution - Top 10 highest-rated movies This export provides a comprehensive overview of the movies collection that can be easily shared or imported into other tools for further analysis or visualization.
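The export itself is handled by the ``export`` tool, but the summarization step is an ordinary aggregation. A minimal ``mongosh`` sketch of a comparable "top 10 highest-rated movies" pipeline (the vote threshold is an illustrative assumption):

.. code-block:: javascript

   // Top 10 highest-rated movies with at least 1,000 IMDb votes (threshold is illustrative)
   db.getSiblingDB("sample_mflix").movies.aggregate([
     { $match: { "imdb.rating": { $type: "number" }, "imdb.votes": { $gte: 1000 } } },
     { $sort: { "imdb.rating": -1 } },
     { $limit: 10 },
     { $project: { _id: 0, title: 1, year: 1, rating: "$imdb.rating" } }
   ]);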
Local Atlas Deployments
The following examples show how to use the MongoDB MCP Server to interact with and create local Atlas deployments.
List Local Atlas Deployments
The following example lists your local Atlas deployments.
List all local Atlas deployments.
I'll list all the local Atlas deployments for you. Ran ``atlas-local-list-deployments`` Here are all your local Atlas deployments: Found 1 deployment: Deployment Name State MongoDB Version local7356 Running 8.2.1 You currently have one Atlas Local deployment named local7356 that is running MongoDB version 8.2.1.
Connect to a Local Atlas Deployment and List Databases
The following example connects to a local Atlas deployment named local7356 and lists its databases.
Connect to my local Atlas deployment local7356 and list the databases.
I'll connect to your local Atlas deployment local7356 and then list the databases. Ran ``atlas-local-connect-deployment`` Now I'll list the databases in this deployment: Ran ``list-databases`` Successfully connected to Atlas Local deployment local7356! Here are the databases in this deployment: Found 4 databases: admin - 262,144 bytes (256 KB) config - 237,568 bytes (~232 KB) local - 1,163,264 bytes (~1.1 MB) testdb - 438,272 bytes (~428 KB) Your local Atlas deployment contains: System databases: admin, config, and local (standard MongoDB system databases) User database: testdb (your custom database) The local database has grown to about 1.1 MB, which is normal for local operations and metadata storage. Would you like me to explore any of these databases further, such as showing the collections in testdb?
Create a New Local Atlas Deployment with a Database and Collection
The following example creates a new local Atlas deployment that contains a database and a sample collection. To run this example, read-only mode must be disabled.
Create a new local Atlas deployment, connect to it, create a collection with sample pizza orders data, and then retrieve the sample data.
I'll help you create a new local Atlas deployment, connect to it, create a collection with sample pizza orders data, and then retrieve it. Let me break this down into steps: Created 4 todos Now let me start with the first task: Starting (1/4) Create new local Atlas deployment Ran ``atlas-local-create-deployment`` Great! Created deployment local5528. Now let me mark that as completed and connect to it: Starting (2/4) Connect to new deployment Ran ``atlas-local-connect-deployment`` Perfect! Now let me create the pizza orders collection with sample data: Starting (3/4) Create pizza orders collection Ran ``create-collection`` Now let me insert sample pizza orders data: Great! Now let me retrieve and display the sample data: Starting (4/4) Retrieve and display sample data Ran ``find`` Completed (4/4) Retrieve and display sample data Perfect! I've successfully completed all the tasks. Here's a summary of what was accomplished: Task Completed Successfully 1. Created New Local Atlas Deployment Deployment Name: local5528 Container ID: XXX Status: Running 2. Connected to the New Deployment Successfully connected to deployment local5528 3. Created Pizza Orders Collection Database: pizzarestaurant Collection: orders Documents Inserted: 5 pizza orders 4. Sample Pizza Orders Data Created The collection now contains 5 realistic pizza orders with the following details: Order ORD-001 - John Smith (New York) Margherita Pizza (Large) + Pepperoni Pizza (Medium) Total: $34.98 | Status: Completed | Delivery Order ORD-002 - Sarah Johnson (Brooklyn) 2x Hawaiian Pizza (Large) + Caesar Salad Total: $50.97 | Status: Preparing | Pickup Order ORD-003 - Mike Chen (Manhattan) Meat Lovers Pizza (Extra Large) + 2x Garlic Bread + 3x Soda Total: $40.95 | Status: Delivered | Delivery Order ORD-004 - Emily Davis (Queens) Veggie Supreme Pizza (Medium) Total: $17.99 | Status: Ready | Pickup Order ORD-005 - David Wilson (Staten Island) BBQ Chicken Pizza (Large) + Buffalo Wings Total: $35.98 | Status: Out for Delivery | Delivery Data Structure Features: Customer Information: Name, phone, email, complete address Order Details: Order ID, date, delivery type, payment method, status Items: Pizza types, sizes, quantities, prices, detailed toppings Financial Data: Individual item prices and total amounts Your new local Atlas deployment is now ready with a fully functional pizza restaurant orders system!
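A minimal ``mongosh`` sketch of the data this example creates, using the ``pizzarestaurant.orders`` namespace from the transcript (the order documents are illustrative):

.. code-block:: javascript

   const pizza = db.getSiblingDB("pizzarestaurant");

   // Insert two illustrative orders (the transcript inserts five)
   pizza.orders.insertMany([
     {
       order_id: "ORD-001",
       customer: { name: "John Smith", city: "New York" },
       items: [
         { pizza: "Margherita", size: "Large", price: 18.99 },
         { pizza: "Pepperoni", size: "Medium", price: 15.99 }
       ],
       total: 34.98,
       status: "Completed",
       delivery_type: "Delivery",
       order_date: new Date("2024-06-01T18:15:00Z")
     },
     {
       order_id: "ORD-002",
       customer: { name: "Sarah Johnson", city: "Brooklyn" },
       items: [ { pizza: "Hawaiian", size: "Large", quantity: 2, price: 17.99 } ],
       total: 50.97,
       status: "Preparing",
       delivery_type: "Pickup",
       order_date: new Date("2024-06-01T18:40:00Z")
     }
   ]);

   // Retrieve the sample data
   pizza.orders.find().sort({ order_id: 1 });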
Delete a Local Atlas Deployment
The following example deletes a local Atlas deployment named local5528. To run this example, read-only mode must be disabled.
Delete my local Atlas deployment local5528.
I'll delete your local Atlas deployment local5528 for you. Ran ``atlas-local-delete-deployment``
After the atlas-local-delete-deployment tool deletes the local Atlas deployment, the AI client might respond that the deployment has already been deleted. This response is expected because the atlas-local-delete-deployment tool ran successfully and the local Atlas deployment no longer exists.
Performance Optimization
The following examples demonstrate how to use the MongoDB MCP Server to identify and resolve performance issues. The actual output of these prompts depends on whether your cluster is experiencing performance issues, so your output might not exactly match the examples.
These examples use the atlas-list-performance-advisor tool to retrieve recommendations from the Performance Advisor.
Analyze Slow Queries
Identify and analyze slow-running queries to understand performance bottlenecks.
Note
When performing slow query analysis, the MongoDB MCP Server retrieves a sample of slow queries, capped at 50 queries. The sample contains up to the 50 most recent slow queries that match any criteria specified in the prompt, which ensures optimal performance and response time.
Query for all slow queries in your cluster:
Show me slow queries in my cluster
Ran ``atlas-list-performance-advisor`` I found extensive slow query logs for ClusterM10. Here's a summary of the slow queries detected: **Query Types and Patterns** **Most Common Slow Queries:** **Collection Statistics Queries ($collStats)** - **Collections**: SchemaAdvisor.Call to Action: Remove unnecessary indexes and SchemaAdvisor.Call to Action: Remove unnecessary indexes 2 - **Duration**: 30-100+ milliseconds - **Pattern**: These are MongoDB Automation Agent queries collecting storage statistics - **Frequency**: Very frequent (every 10 minutes) **Collection Statistics for mcp_db.new** - **Duration**: 30-60+ milliseconds - **Pattern**: Similar $collStats operations **Find Query on sample_db.test** - **Query**: {"find": "dbC", "filter": {"z": 10}} - **Duration**: 5 milliseconds - **Plan**: COLLSCAN (Collection Scan) - **Performance Issue**: Examined 10,001 documents but only returned 1 - This matches the index suggestion we saw earlier! **Key Performance Issues** **Inefficient Query on sample_db.test:** - The find query {"z": 10} is doing a full collection scan - This directly relates to the index suggestion we found: creating an index on field z would significantly improve this query - Currently examining 10,001 documents to find 1 result **Automation Agent Overhead:** - Many slow queries are from MongoDB Automation Agent collecting statistics - These are system-level operations that run regularly - Duration typically 30-100ms, which is acceptable for system operations **Recommendations** **Implement the Suggested Index:** - Create an index on {"z": 1} for the sample_db.test collection - This will eliminate the collection scan and dramatically improve the find query performance **Monitor Automation Agent Performance:** - The frequent $collStats operations are normal but worth monitoring - Consider if the collection statistics collection frequency can be optimized The most actionable finding is the inefficient query on sample_db.test that would benefit from the index suggestion we identified earlier.
Query for general performance issues:
How is my cluster performance?
Ran ``atlas-list-performance-advisor`` Based on the performance advisor data I retrieved for ClusterM10, here's an assessment of your cluster performance: **Overall Performance Assessment: Good with Room for Optimization** **✅ Positive Indicators** - **Query Response Times**: Most queries are completing in reasonable timeframes (30-100ms) - **No Critical Issues**: No extremely slow queries (>1000ms) detected - **Stable Operations**: Consistent performance patterns over time - **System Health**: MongoDB Automation Agent operations are functioning normally **⚠️ Areas for Improvement** **Inefficient Query Pattern:** - **Issue**: Find query {"z": 10} on sample_db.test collection - **Current**: Full collection scan examining 10,001 documents - **Impact**: Unnecessary resource consumption - **Solution**: Implement suggested index on field z **Resource Utilization:** - **CPU Usage**: Queries consuming 5-45+ CPU nanoseconds - **Memory**: Response sizes up to 602KB for statistics queries - **I/O**: Some queries showing storage read operations **📊 Performance Metrics Summary** - **Slow Query Count**: 50+ entries in recent logs - **Average Query Duration**: 30-100ms (acceptable range) - **Most Common Operations**: Collection statistics gathering - **Index Optimization Opportunity**: 1 significant improvement identified **🎯 Immediate Recommendations** **High Priority**: Create index {"z": 1} on sample_db.test collection - **Expected Impact**: Eliminate collection scan, reduce query time from 5ms to <1ms - **Resource Savings**: Reduce CPU and I/O usage significantly **Medium Priority**: Monitor automation agent performance - Current 30-100ms for $collStats operations is acceptable - Consider optimizing if frequency becomes problematic **📈 Performance Grade: B+** Your cluster is performing well overall, but implementing the suggested index would elevate it to an A-grade performance level. The main bottleneck is the inefficient query pattern that can be easily resolved with proper indexing. Would you like me to help you implement the suggested index or dive deeper into any specific performance metrics?
Query for a specific operation type, duration, or namespace:
Show me slow writes for the past 24 hours in the movies collection
Ran ``atlas-list-performance-advisor`` Based on the slow query logs from the sample_db.test collection, I can see that there are no slow write operations in this collection over the past 24 hours. **Analysis Summary** **Slow Query Logs Found in sample_db.test:** - **1 slow query detected** - this is a READ operation (find query) - **No write operations found** in the slow query logs - Query occurred on 2025-09-30 at 14:52:56 UTC **Detailed Breakdown of the Slow Query** **Query Details:** .. code-block:: javascript db.dbC.find({ "z": 10 }) **Performance Characteristics:** - **Operation Type**: find (READ operation) - **Collection Scan**: ``"planSummary": "COLLSCAN"`` - **Documents Examined**: 10,001 documents - **Keys Examined**: 0 (no index usage) - **Duration**: 6 milliseconds - **Results Returned**: 1 document - **Planning Time**: 1.78ms - **Mirrored Query**: Yes (read preference: secondaryPreferred) **Performance Issues:** - **Missing Index**: The query on field ``z`` is performing a collection scan - **Inefficient**: Examining 10,001 documents to find 1 result - **Read Preference**: Using secondaryPreferred (mirrored query) **Key Findings** ✅ **Write Performance**: No slow write operations detected ⚠️ **Read Performance**: One slow read operation due to missing index
Query for a specific execution time:
Show me queries longer than 5 seconds
Ran ``atlas-list-performance-advisor`` Based on the slow query logs from your MongoDB Atlas cluster, I can see that there are no queries longer than 5 seconds in your cluster performance data. **Analysis Summary** **Query Duration Analysis:** - All slow queries are under 5 seconds - the longest queries are around 117ms (0.117 seconds) - Most queries are between 30-100ms duration - No queries exceed 5 seconds (5000ms) **Detailed Breakdown of Slow Queries** The slow query logs show primarily MongoDB Automation Agent queries with the following characteristics: **Query Types Found:** **SchemaAdvisor Collections** - MongoDB's automated performance monitoring queries - **Duration**: 30-117ms - **Operations**: ``$collStats`` aggregation queries - **Purpose**: Automated performance analysis **smol.woofer Collection** - User application queries - **Duration**: 33ms - **Operations**: Aggregation with ``$match`` and ``$lookup`` - **Issue**: Collection scan (no index usage) **mcp_db.new Collection** - Storage statistics queries - **Duration**: 33-69ms - **Operations**: ``$collStats`` aggregation **sample_db.test Collection** - Find queries - **Duration**: 6ms - **Operations**: find with collection scan **Performance Status**: ✅ Good Your cluster is performing well with: - No queries longer than 5 seconds - All slow queries are under 200ms - Most queries are under 100ms The slow queries are primarily from MongoDB's automated monitoring systems, which is normal and expected behavior.
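The results above come from the ``atlas-list-performance-advisor`` tool. If you also want to check for long-running operations directly from ``mongosh``, a minimal sketch using ``$currentOp`` and, on deployments where the database profiler is available, the ``system.profile`` collection (both complement, rather than replace, the Performance Advisor):

.. code-block:: javascript

   // Operations that have been running for more than 5 seconds right now
   db.getSiblingDB("admin").aggregate([
     { $currentOp: {} },
     { $match: { secs_running: { $gt: 5 } } },
     { $project: { ns: 1, secs_running: 1, command: 1 } }
   ]);

   // Where the database profiler is enabled (db.setProfilingLevel(1, { slowms: 100 })),
   // past operations slower than 5 seconds are recorded in system.profile
   db.getSiblingDB("sample_db").system.profile.find({ millis: { $gt: 5000 } });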
Get Index Suggestions
Use Performance Advisor suggestions to create or drop indexes to improve performance.
What indexes should I create or drop to improve performance?
Ran ``atlas-list-performance-advisor`` Based on the performance advisor analysis of your MongoDB Atlas cluster "ClusterM10", here are the specific index recommendations: **Index to CREATE:** **High Priority - Create Index on sample_db.test collection:** - **Index**: { "z": 1 } - **Weight**: 1,020,000 bytes (≈ 1.02 MB saved per query) - **Impact**: This index will significantly improve performance for queries filtering on the z field - **Reason**: The slow query logs show a query {"z": 10} that performed a collection scan (COLLSCAN) and examined 10,001 documents to return just 1 result **Indexes to DROP:** **No Drop Recommendations Found** The performance advisor found no indexes that should be dropped at this time. **Additional Performance Insights:** **Schema Recommendations:** - **Remove Unnecessary Indexes**: You have collections with more than 30 indexes, which can impact write performance - **Reduce Document Size**: Some documents exceed 2MB, which can cause performance issues - **Avoid Unbounded Arrays**: Some collections have arrays with over 10,000 entries **Slow Query Analysis:** The slow query logs show that most slow queries are from MongoDB's internal automation agent running $collStats operations, which is normal. However, there's one user query that would benefit significantly from the suggested index: - **Query**: db.dbC.find({"z": 10}) - **Current Performance**: Collection scan examining 10,001 documents - **Expected Improvement**: With the suggested index, this should become an index scan examining only the matching documents **Recommended Action:** Create the index { "z": 1 } on the sample_db.test collection. This single index creation could save approximately 1MB of disk reads per query execution for queries filtering on the z field.
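If you decide to apply the suggestion, a minimal ``mongosh`` sketch that creates the index and confirms the query plan changes from a collection scan to an index scan (the advisor output above refers to both ``sample_db.test`` and ``dbC``; substitute your actual namespace):

.. code-block:: javascript

   const sampleDb = db.getSiblingDB("sample_db");

   // Create the suggested single-field index
   sampleDb.dbC.createIndex({ z: 1 });

   // Confirm the plan changed from COLLSCAN to IXSCAN
   printjson(sampleDb.dbC.find({ z: 10 }).explain("executionStats").queryPlanner.winningPlan);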
Optimize Schemas
Get recommendations for improving your database schema design and structure.
Suggest schema improvements for my database
Ran ``atlas-list-performance-advisor`` Based on the performance advisor analysis of your MongoDB Atlas cluster, here are the specific schema improvements recommended: **Critical Schema Issues Found:** **1. Remove Unnecessary Indexes** **Collections Affected:** - SchemaAdvisor.Call to Action: Remove unnecessary indexes - SchemaAdvisor.Call to Action: Remove unnecessary indexes 2 **Issue**: Both collections have more than 30 indexes each **Impact:** - Slower write operations - Increased storage overhead - Longer index maintenance time - Reduced memory efficiency **Recommendation:** - Audit all indexes in these collections - Remove indexes that are not actively used by your queries - Keep only indexes that support your application's query patterns - Consider compound indexes instead of multiple single-field indexes **2. Reduce Document Size** **Collections Affected:** - SchemaAdvisor.Call to Action: Reduce the size of documents - SchemaAdvisor.Call to Action: Reduce the size of documents 2 **Issue**: Documents larger than 2MB found **Impact:** - Documents cannot be stored in MongoDB (16MB limit) - Performance degradation for large documents - Increased memory usage - Slower network transfer **Recommendations:** - Split large documents into smaller, related documents - Use GridFS for storing large binary data - Normalize data by moving large arrays or objects to separate collections - Archive old data to reduce document size - Compress data where appropriate **3. Avoid Unbounded Arrays** **Collections Affected:** - SchemaAdvisor.Call to Action: Avoid using unbounded arrays in documents **Issue**: Arrays with over 10,000 entries detected **Impact:** - Poor query performance on array elements - Memory consumption issues - Index inefficiency - Difficult to maintain and update **Recommendations:** - Move large arrays to separate collections with references - Implement pagination for array access - Use embedded documents with limits (e.g., max 100 items) - Consider denormalization for frequently accessed array elements
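Acting on these recommendations starts with finding the offending documents. A minimal ``mongosh`` sketch using the ``$bsonSize`` and ``$size`` aggregation operators (available in MongoDB 4.4+); the namespace and the ``items`` array field are placeholders:

.. code-block:: javascript

   const coll = db.getSiblingDB("myDatabase").myCollection;   // placeholder namespace

   // Find the 10 largest documents that exceed 2 MB
   coll.aggregate([
     { $match: { $expr: { $gt: [ { $bsonSize: "$$ROOT" }, 2 * 1024 * 1024 ] } } },
     { $project: { sizeBytes: { $bsonSize: "$$ROOT" } } },
     { $sort: { sizeBytes: -1 } },
     { $limit: 10 }
   ]);

   // Find documents whose `items` array (placeholder field) has more than 10,000 entries
   coll.aggregate([
     { $match: { $expr: { $gt: [ { $size: { $ifNull: [ "$items", [] ] } }, 10000 ] } } },
     { $project: { itemCount: { $size: "$items" } } }
   ]);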
Next Steps
For more information about configuring and using the MongoDB MCP Server: