This page shows how to use the MongoDB MCP Server through practical examples that use natural language prompts. Each example shows a prompt that you can enter into your AI client and an example of the response you might receive from the large language model (LLM).
You can use the examples on this page as a starting point for your own interactions with the MongoDB MCP Server.
Note
The exact output you receive depends on your data, your AI client, and the LLM you use. In these examples, sensitive information such as organization IDs, project IDs, and passwords is redacted.
Prerequisites
Before you run these examples, ensure that you have:
Access to a MongoDB deployment that contains data to explore.
Atlas Administration
The following examples show how to use the MongoDB MCP Server to manage your Atlas infrastructure.
View Organizations and Projects
Get an overview of your Atlas account structure and available resources.
Show my Atlas organizations and projects
Ran ``atlas-list-orgs``
Ran ``atlas-list-projects``
Here are your MongoDB Atlas organizations and projects:
**Organizations:**
TestOrganization
Organization ID: XXX
**Projects:**
TestProject
Project ID: XXX
Organization: TestOrganization
Created: August 7, 2020 at 11:48:55 AM
You have one organization containing one project. This project contains your TestCluster that we're currently connected to.
Create Atlas Clusters
Set up a new Atlas project with a cluster and all of the necessary access configuration in a single workflow.
Create a new Atlas project named myNewProject and create a free cluster named myNewCluster in the new project, add access list for my current IP, create a database user named myNewUser with read and write access to the new cluster, and return the connection string for the new cluster
I'll create a new Atlas project, free cluster, and set up all the necessary access details for you.
Ran ``atlas-create-project``
Great! I've created a new project named "myNewProject" in your MongoDB Atlas organization. Now let's list the projects to get the project ID for the new project:
Ran ``atlas-list-projects``
Now I'll create a free cluster named "myNewCluster" in the new project:
Ran ``atlas-create-free-cluster``
Now, I'll add your current IP address to the access list:
Ran ``atlas-create-access-list``
Now let's create a database user named "myNewUser" with read and write access:
Ran ``atlas-create-db-user``
Perfect! Everything has been set up successfully:
**New Project Information:**
Project Name: myNewProject
Project ID: XXX
**New Cluster Information:**
Cluster Name: myNewCluster
Region: US East (US_EAST_1)
**Database User:**
Username: myNewUser
Password: XXX
Access: Read and write to any database
**Connection String:**
mongodb+srv://myNewUser:XXX@myNewCluster.mongodb.net/
Your current IP address has been added to the access list, so you should be able to connect to the cluster from your current location. The cluster will take a few minutes to fully provision and be ready for connections.
Data Exploration
The following examples show how to use the MongoDB MCP Server to explore and understand your MongoDB data.
List Available Databases
Discover the databases available in your cluster and gain insight into their contents.
Show my Atlas databases
Ran ``list-databases``
Here are the databases in your Atlas cluster "TestCluster":
**Sample Databases:**
sample_mflix - 138.4 MB (Movie database with films, theaters, users, comments)
sample_training - 53.6 MB (Training data for learning MongoDB)
sample_airbnb - 55.3 MB (Airbnb listings and reviews data)
sample_analytics - 10.0 MB (Analytics and customer data)
sample_restaurants - 6.8 MB (Restaurant and review data)
sample_weatherdata - 2.7 MB (Weather observation data)
sample_geospatial - 1.3 MB (Geospatial data examples)
sample_supplies - 1.1 MB (Supply chain data)
sample_guides - 41 KB (Getting started guides)
**System Databases:**
local - 4.1 GB (MongoDB internal database)
admin - 365 KB (Administrative database)
Your cluster contains MongoDB's sample datasets, which are great for learning and testing. The largest databases are sample_mflix (movies), sample_training, and sample_airbnb. Would you like me to explore any specific database or show you the collections within one of them?
Explore Collections
Dive deeper into a specific database to understand its structure and collections.
Show the collections in the Atlas sample_mflix database
Ran ``list-collections``
Here are the collections in the sample_mflix database:
**Core Collections:**
movies - Main collection containing movie data (titles, genres, cast, ratings, etc.)
users - User profiles and account information
comments - User comments and reviews on movies
theaters - Movie theater locations and details
sessions - User session data
**Additional Collections:**
embedded_movies - Movies collection with embedded/denormalized data structure
The movies collection contains detailed information about films including plot, cast, directors, ratings, release dates, and genres. The comments collection links to movies and users to store user reviews and discussions. Would you like me to explore any specific collection in more detail, such as showing sample documents or collection statistics?
Data Analysis
Analyze your MongoDB data to identify patterns and trends.
Summarize the Atlas movies collection by thrillers
I'll help you analyze the Atlas movies collection focusing on thrillers. First, let me check what databases are available in your connected cluster.
Ran ``list-databases``
Perfect! I can see you have the sample_mflix database which contains the movies collection. Let me check the collections in that database:
Ran ``list-collections``
Great! Now let me analyze the movies collection to find thrillers and provide a summary. First, let me check the schema to understand how genres are stored:
Ran ``collection-schema``
Perfect! I can see that genres are stored as an array of strings. Now let me get a count of thriller movies and then perform an aggregation to summarize them:
Ran ``count``
Now let me create a comprehensive summary of thriller movies using aggregation:
Ran ``aggregate``
I see there are some data quality issues. Let me get better data by filtering for movies with numeric ratings:
Ran ``aggregate``
Now let me get some more summary statistics by decade:
Ran ``aggregate``
Here's a comprehensive analysis of thriller movies in your Atlas sample_mflix collection:
**Overall Statistics**
Total Thriller Movies: 2,454 films
Time Span: 1924 to 2011
Average Rating: 6.3/10 (IMDb)
Average Runtime: 105 minutes
Total IMDb Votes: 85.8 million across all thrillers
**Top-Rated Thriller Movies**
Drishyam (2013) - 8.9/10 - Indian thriller directed by Jeethu Joseph
Tinker Tailor Soldier Spy (1979) - 8.7/10 - British spy thriller
Rear Window (1954) - 8.6/10 - Alfred Hitchcock classic
The Silence of the Lambs (1991) - 8.6/10 - Psychological thriller
Se7en (1995) - 8.6/10 - David Fincher crime thriller
The Usual Suspects (1995) - 8.6/10 - Neo-noir mystery
Léon: The Professional (1994) - 8.6/10 - Action thriller
Psycho (1960) - 8.6/10 - Another Hitchcock masterpiece
Memento (2000) - 8.5/10 - Christopher Nolan psychological thriller
**Trends by Decade**
1920s-1950s: Few thrillers but high quality (avg 7.2-7.8 rating)
1960s-1970s: Growing popularity, still maintaining quality
1980s: Significant growth (200 films), quality decline begins
1990s: Peak production era (417 films), avg 6.3 rating
2000s: Highest volume (854 films), avg 6.24 rating
2010s: 730 films, lowest average rating (6.12)
The data shows that while thriller production increased dramatically from the 1980s onward, the average quality (as measured by IMDb ratings) has generally declined, suggesting the genre became more commercialized with higher volume but potentially lower artistic standards in recent decades.
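For reference, the decade-level summary that the ``aggregate`` tool produces corresponds to a standard aggregation pipeline. The following mongosh sketch shows one possible equivalent pipeline; the pipeline the LLM actually generates may differ, and the field names assume the sample_mflix schema shown above.

.. code-block:: javascript

   // One possible pipeline for summarizing thrillers by decade in sample_mflix.
   db.getSiblingDB("sample_mflix").movies.aggregate([
     // Keep thrillers that have numeric year and IMDb rating values
     { $match: { genres: "Thriller", year: { $type: "number" }, "imdb.rating": { $type: "number" } } },
     // Bucket by decade, for example 1995 -> 1990
     { $group: {
         _id: { $multiply: [ { $floor: { $divide: [ "$year", 10 ] } }, 10 ] },
         count: { $sum: 1 },
         avgRating: { $avg: "$imdb.rating" },
         avgRuntime: { $avg: "$runtime" }
     } },
     { $sort: { _id: 1 } }
   ])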
Database Operations
The following examples show how to use the MongoDB MCP Server to perform common database operations.
Insert New Documents
Add new data to a collection.
Add a document to the movies_copy collection
I'll add a new document to the movies_copy collection. Let me create a sample movie document with the same structure as the existing movies:
Ran ``insert-many``
Perfect! I've successfully added a new movie document to the movies_copy collection. Here's what I added:
**New Movie Added:**
Title: "The AI Thriller"
Genres: Thriller, Sci-Fi, Drama
Year: 2024
Runtime: 118 minutes
IMDb Rating: 7.8/10 (125,000 votes)
Cast: John Smith, Sarah Johnson, Michael Chen
Director: Alex Rodriguez
Rating: PG-13
The document was inserted with ID: 684b15fb5084ddef1f265e8f
Let me verify the collection now has one additional document:
Ran ``count``
Confirmed! The movies_copy collection now contains 21,350 documents (up from 21,349), including your new thriller movie about AI consciousness.
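The ``insert-many`` tool call above is equivalent to an ordinary insert in mongosh. The following is a minimal sketch; the document shape mirrors the example summary above and the values are illustrative, not part of the sample dataset.

.. code-block:: javascript

   // A manual insert equivalent to the example above (illustrative values).
   db.getSiblingDB("sample_mflix").movies_copy.insertOne({
     title: "The AI Thriller",
     genres: ["Thriller", "Sci-Fi", "Drama"],
     year: 2024,
     runtime: 118,
     imdb: { rating: 7.8, votes: 125000 },
     cast: ["John Smith", "Sarah Johnson", "Michael Chen"],
     directors: ["Alex Rodriguez"],
     rated: "PG-13"
   })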
Create New Collections
Create and customize a new collection.
Create a new collection to store movie purchases data that includes geospatial and timeseries fields
I'll create a new collection for storing movie purchases data with geospatial and timeseries fields. Let me first create the collection and then add some sample documents with the appropriate structure.
Ran ``create-collection``
Now I'll insert sample documents that include geospatial data (customer location, theater location) and timeseries data (purchase timestamp, viewing timestamp):
Ran ``insert-many``
Perfect! I've created the movie_purchases collection with sample data that includes both geospatial and timeseries fields. Here's what the collection contains:
**Collection Structure**
**Geospatial Fields:**
customer_location: GeoJSON Point with customer's coordinates
theater_location: GeoJSON Point with theater coordinates
customer_address: Human-readable address information
**Timeseries Fields:**
purchase_timestamp: When the purchase was made
viewing_timestamp: When the movie was actually watched
viewing_duration_minutes: How long the customer watched
completion_percentage: Percentage of movie completed
**Sample Data Added:**
Purchase 1: Customer in San Francisco bought "The Shawshank Redemption" rental for $12.99
Purchase 2: Customer in New York purchased "Pulp Fiction" for $15.99
Purchase 3: Customer in Los Angeles rented "Se7en" for $9.99
Now let me create a geospatial index to optimize location-based queries:
Ran ``create-index``
Excellent! The movie_purchases collection is now ready with:
- Geospatial capabilities - 2dsphere index for location-based queries
- Timeseries optimization - Index on purchase timestamps
- Sample data - 3 purchase records across different US cities
This collection can now support queries like:
- Finding purchases within a certain radius of a location
- Analyzing purchase patterns over time
- Correlating viewing behavior with geographic regions
- Time-based analytics and trending
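The documents and indexes described above can also be created directly in mongosh. The following sketch uses assumed field names (the LLM may choose different ones); locations are stored as GeoJSON Points so that a 2dsphere index can support location-based queries.

.. code-block:: javascript

   // A sketch of one purchase document and the supporting indexes (assumed field names).
   const purchases = db.getSiblingDB("sample_mflix").movie_purchases;

   purchases.insertOne({
     movie_title: "The Shawshank Redemption",
     purchase_type: "rental",
     price: 12.99,
     purchase_timestamp: new Date("2024-06-01T19:30:00Z"),
     viewing_timestamp: new Date("2024-06-01T20:05:00Z"),
     customer_location: { type: "Point", coordinates: [-122.4194, 37.7749] } // [lng, lat], San Francisco
   });

   // 2dsphere index for location-based queries
   purchases.createIndex({ customer_location: "2dsphere" });
   // Ascending index to speed up time-based queries
   purchases.createIndex({ purchase_timestamp: 1 });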
Copy Existing Collections
Create a backup or duplicate of an existing collection.
Make a copy of the movies collection named movies_copy
I'll create a copy of the movies collection named movies_copy. I'll use an aggregation pipeline with $out to copy all documents from the movies collection to a new movies_copy collection.
Ran ``aggregate``
Perfect! I've successfully created a copy of the movies collection named movies_copy. Let me verify the copy was created correctly by checking the document count:
Ran ``count``
Found 21349 documents in the collection "movies_copy"
Excellent! The movies_copy collection has been successfully created with all 21,349 documents from the original movies collection. The copy operation completed successfully and you now have an identical backup of your movies collection.
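The copy operation described above comes down to a single $out stage. The following mongosh sketch shows the equivalent pipeline and a count to verify the result.

.. code-block:: javascript

   // Copy every document from movies into movies_copy in the same database.
   db.getSiblingDB("sample_mflix").movies.aggregate([
     { $out: "movies_copy" }
   ]);

   // Verify the copy
   db.getSiblingDB("sample_mflix").movies_copy.countDocuments();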
Export and Share Data
The following example shows how to export query results for sharing or processing with external tools.
Generate a comprehensive report and export it for use in other applications or to share with team members.
Exported results are saved to a file on the computer that runs the MCP Server. You can also access the exported data through the exported-data resource by using your AI client application.
Export files are saved to the file system path specified by the exportPath configuration option. Export files become eligible for deletion after the period specified by the exportTimeout configuration option. For more information, see Export Data from the MongoDB MCP Server.
Summarize and export the contents of the movies collection.
I'll help you summarize and export the contents of the movies collection from your connected MongoDB Atlas cluster. Let's do this in two steps:
1. First, I'll create a summary of the movies collection using an aggregation pipeline.
2. Then I'll export the results using the export tool.
Let's start by getting a summary of the movies collection:
Ran ``aggregate``
Now that we have a summary of the movies collection, let's create a more focused dataset to export. Let's get the top 10 highest-rated movies:
Ran ``aggregate``
Now, let's export this data using the MCP Server export tool:
Ran ``export``
I've successfully summarized and exported the contents of the movies collection from your MongoDB Atlas cluster. Here's a breakdown of what I did:
**Movies Collection Summary**
**Collection Overview:**
The collection contains 21,349 movie documents
Time span: 1896 to 2016 (120 years of cinema history)
Average IMDb rating: 6.66 out of 10
Average runtime: ~104 minutes
**Top Genres:**
Drama (12,385 movies)
Comedy (6,532 movies)
Romance (3,318 movies)
Crime (2,457 movies)
Thriller (2,454 movies)
**Top Directors:**
Woody Allen (40 movies)
Martin Scorsese (32 movies)
Takashi Miike (31 movies)
Steven Spielberg (29 movies)
John Ford (29 movies)
**Top Languages:**
English (14,639 movies)
French (2,405 movies)
Spanish (1,600 movies)
German (1,377 movies)
Italian (1,228 movies)
**Highest Rated Movies:**
Band of Brothers (2001) - 9.6/10
Planet Earth (2006) - 9.5/10
The Civil War (1990) - 9.4/10
A Brave Heart: The Lizzie Velasquez Story (2015) - 9.4/10
Cosmos (1980) - 9.3/10
**Export Details**
The data has been exported to a JSON file in relaxed format. You can access the exported file at:
/Users/XXX/.mongodb/mongodb-mcp/exports/68b8c10a6d0291128800be0b/sample_mflix.movies.68b8c2496d0291128800be0d.json
The export includes:
- Genre summary with counts and average ratings
- Runtime statistics (min, max, average)
- Year-by-decade summary with counts and ratings
- Top directors with film counts
- Language distribution
- Top 10 highest-rated movies
This export provides a comprehensive overview of the movies collection that can be easily shared or imported into other tools for further analysis or visualization.
Local Atlas Deployments
The following examples show how to use the MongoDB MCP Server to create and work with local Atlas deployments.
List Local Atlas Deployments
The following example lists your local Atlas deployments.
List all local Atlas deployments.
I'll list all the local Atlas deployments for you.
Ran ``atlas-local-list-deployments``
Here are all your local Atlas deployments:
Found 1 deployment:
Deployment Name   State     MongoDB Version
local7356         Running   8.2.1
You currently have one Atlas Local deployment named local7356 that is running MongoDB version 8.2.1.
Connect to a Local Atlas Deployment and List Databases
The following example connects to the local Atlas deployment named local7356 and lists its databases.
Connect to my local Atlas deployment local7356 and list the databases.
I'll connect to your local Atlas deployment local7356 and then list the databases.
Ran ``atlas-local-connect-deployment``
Now I'll list the databases in this deployment:
Ran ``list-databases``
Successfully connected to Atlas Local deployment local7356! Here are the databases in this deployment:
Found 4 databases:
admin - 262,144 bytes (256 KB)
config - 237,568 bytes (~232 KB)
local - 1,163,264 bytes (~1.1 MB)
testdb - 438,272 bytes (~428 KB)
Your local Atlas deployment contains:
System databases: admin, config, and local (standard MongoDB system databases)
User database: testdb (your custom database)
The local database has grown to about 1.1 MB, which is normal for local operations and metadata storage. Would you like me to explore any of these databases further, such as showing the collections in testdb?
Create a New Local Atlas Deployment with a Database and Collection
The following example creates a new local Atlas deployment that contains a database and an example collection. To run this example, you must disable read-only mode.
Create a new local Atlas deployment, connect to it, create a collection with sample pizza orders data, and then retrieve the sample data.
I'll help you create a new local Atlas deployment, connect to it, create a collection with sample pizza orders data, and then retrieve it. Let me break this down into steps:
Created 4 todos
Now let me start with the first task:
Starting (1/4) Create new local Atlas deployment
Ran ``atlas-local-create-deployment``
Great! Created deployment local5528. Now let me mark that as completed and connect to it:
Starting (2/4) Connect to new deployment
Ran ``atlas-local-connect-deployment``
Perfect! Now let me create the pizza orders collection with sample data:
Starting (3/4) Create pizza orders collection
Ran ``create-collection``
Now let me insert sample pizza orders data:
Great! Now let me retrieve and display the sample data:
Starting (4/4) Retrieve and display sample data
Ran ``find``
Completed (4/4) Retrieve and display sample data
Perfect! I've successfully completed all the tasks. Here's a summary of what was accomplished:
Task Completed Successfully
1. Created New Local Atlas Deployment
Deployment Name: local5528
Container ID: XXX
Status: Running
2. Connected to the New Deployment
Successfully connected to deployment local5528
3. Created Pizza Orders Collection
Database: pizzarestaurant
Collection: orders
Documents Inserted: 5 pizza orders
4. Sample Pizza Orders Data Created
The collection now contains 5 realistic pizza orders with the following details:
Order ORD-001 - John Smith (New York)
Margherita Pizza (Large) + Pepperoni Pizza (Medium)
Total: $34.98 | Status: Completed | Delivery
Order ORD-002 - Sarah Johnson (Brooklyn)
2x Hawaiian Pizza (Large) + Caesar Salad
Total: $50.97 | Status: Preparing | Pickup
Order ORD-003 - Mike Chen (Manhattan)
Meat Lovers Pizza (Extra Large) + 2x Garlic Bread + 3x Soda
Total: $40.95 | Status: Delivered | Delivery
Order ORD-004 - Emily Davis (Queens)
Veggie Supreme Pizza (Medium)
Total: $17.99 | Status: Ready | Pickup
Order ORD-005 - David Wilson (Staten Island)
BBQ Chicken Pizza (Large) + Buffalo Wings
Total: $35.98 | Status: Out for Delivery | Delivery
Data Structure Features:
Customer Information: Name, phone, email, complete address
Order Details: Order ID, date, delivery type, payment method, status
Items: Pizza types, sizes, quantities, prices, detailed toppings
Financial Data: Individual item prices and total amounts
Your new local Atlas deployment is now ready with a fully functional pizza restaurant orders system!
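For reference, the following mongosh sketch shows one possible shape for the inserted order documents. The field names and contact details are illustrative, and the documents the LLM generates may be structured differently.

.. code-block:: javascript

   // One illustrative pizza order document matching the summary above.
   db.getSiblingDB("pizzarestaurant").orders.insertOne({
     orderId: "ORD-001",
     orderDate: new Date("2025-01-15T18:42:00Z"),        // illustrative timestamp
     customer: {
       name: "John Smith",
       address: { city: "New York", state: "NY" }
     },
     items: [
       { name: "Margherita Pizza", size: "Large",  quantity: 1, price: 18.99 },
       { name: "Pepperoni Pizza",  size: "Medium", quantity: 1, price: 15.99 }
     ],
     total: 34.98,
     deliveryType: "Delivery",
     status: "Completed"
   });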
Delete a Local Atlas Deployment
The following example deletes the local Atlas deployment named local5528. To run this example, you must disable read-only mode.
Delete my local Atlas deployment local5528.
I'll delete your local Atlas deployment local5528 for you.
Ran ``atlas-local-delete-deployment``
After the atlas-local-delete-deployment tool deletes the local Atlas deployment, the AI client might respond that the deployment has already been deleted. This happens because the atlas-local-delete-deployment tool ran successfully and the local Atlas deployment no longer exists.
Performance Optimization
The following examples show how to use the MongoDB MCP Server to identify and resolve performance issues. The actual output of these prompts depends on whether your cluster is experiencing performance issues, so your output might not exactly match the examples.
These examples use the atlas-list-performance-advisor tool to retrieve recommendations from the Performance Advisor.
Analyze Slow Queries
Identify and analyze poorly performing queries to understand performance bottlenecks.
Note
When performing slow query analysis, the MongoDB MCP Server retrieves a sample of slow queries, limited to a maximum of 50 queries. The sample contains up to 50 of the most recent slow queries that match the criteria specified in your prompt, which ensures optimal performance and response time.
Query for all slow queries in your cluster.
Show me slow queries in my cluster
Ran ``atlas-list-performance-advisor``
I found extensive slow query logs for ClusterM10. Here's a summary of the slow queries detected:
**Query Types and Patterns**
**Most Common Slow Queries:**
**Collection Statistics Queries ($collStats)**
- **Collections**: SchemaAdvisor.Call to Action: Remove unnecessary indexes and SchemaAdvisor.Call to Action: Remove unnecessary indexes 2
- **Duration**: 30-100+ milliseconds
- **Pattern**: These are MongoDB Automation Agent queries collecting storage statistics
- **Frequency**: Very frequent (every 10 minutes)
**Collection Statistics for mcp_db.new**
- **Duration**: 30-60+ milliseconds
- **Pattern**: Similar $collStats operations
**Find Query on sample_db.test**
- **Query**: {"find": "dbC", "filter": {"z": 10}}
- **Duration**: 5 milliseconds
- **Plan**: COLLSCAN (Collection Scan)
- **Performance Issue**: Examined 10,001 documents but only returned 1 - This matches the index suggestion we saw earlier!
**Key Performance Issues**
**Inefficient Query on sample_db.test:**
- The find query {"z": 10} is doing a full collection scan
- This directly relates to the index suggestion we found: creating an index on field z would significantly improve this query
- Currently examining 10,001 documents to find 1 result
**Automation Agent Overhead:**
- Many slow queries are from MongoDB Automation Agent collecting statistics
- These are system-level operations that run regularly
- Duration typically 30-100ms, which is acceptable for system operations
**Recommendations**
**Implement the Suggested Index:**
- Create an index on {"z": 1} for the sample_db.test collection
- This will eliminate the collection scan and dramatically improve the find query performance
**Monitor Automation Agent Performance:**
- The frequent $collStats operations are normal but worth monitoring
- Consider if the collection statistics collection frequency can be optimized
The most actionable finding is the inefficient query on sample_db.test that would benefit from the index suggestion we identified earlier.
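If you want to confirm a collection scan yourself, you can reproduce the finding with explain() in mongosh. The following is a minimal sketch using the query from the slow query log above; the example log refers to both sample_db.test and a dbC collection, so adjust the namespace to match your deployment.

.. code-block:: javascript

   // Inspect the winning plan and work done for the slow query from the log above.
   const stats = db.getSiblingDB("sample_db").dbC.find({ z: 10 }).explain("executionStats");

   printjson(stats.queryPlanner.winningPlan);                         // shows COLLSCAN when no index on z exists
   print("docs examined: " + stats.executionStats.totalDocsExamined); // ~10,001 in the example above
   print("docs returned: " + stats.executionStats.nReturned);         // 1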
Query for general performance issues.
How is my cluster performance?
Ran ``atlas-list-performance-advisor``
Based on the performance advisor data I retrieved for ClusterM10, here's an assessment of your cluster performance:
**Overall Performance Assessment: Good with Room for Optimization**
**✅ Positive Indicators**
- **Query Response Times**: Most queries are completing in reasonable timeframes (30-100ms)
- **No Critical Issues**: No extremely slow queries (>1000ms) detected
- **Stable Operations**: Consistent performance patterns over time
- **System Health**: MongoDB Automation Agent operations are functioning normally
**⚠️ Areas for Improvement**
**Inefficient Query Pattern:**
- **Issue**: Find query {"z": 10} on sample_db.test collection
- **Current**: Full collection scan examining 10,001 documents
- **Impact**: Unnecessary resource consumption
- **Solution**: Implement suggested index on field z
**Resource Utilization:**
- **CPU Usage**: Queries consuming 5-45+ CPU nanoseconds
- **Memory**: Response sizes up to 602KB for statistics queries
- **I/O**: Some queries showing storage read operations
**📊 Performance Metrics Summary**
- **Slow Query Count**: 50+ entries in recent logs
- **Average Query Duration**: 30-100ms (acceptable range)
- **Most Common Operations**: Collection statistics gathering
- **Index Optimization Opportunity**: 1 significant improvement identified
**🎯 Immediate Recommendations**
**High Priority**: Create index {"z": 1} on sample_db.test collection
- **Expected Impact**: Eliminate collection scan, reduce query time from 5ms to <1ms
- **Resource Savings**: Reduce CPU and I/O usage significantly
**Medium Priority**: Monitor automation agent performance
- Current 30-100ms for $collStats operations is acceptable
- Consider optimizing if frequency becomes problematic
**📈 Performance Grade: B+**
Your cluster is performing well overall, but implementing the suggested index would elevate it to an A-grade performance level. The main bottleneck is the inefficient query pattern that can be easily resolved with proper indexing. Would you like me to help you implement the suggested index or dive deeper into any specific performance metrics?
Query for specific operation types, time periods, or namespaces.
Show me slow writes for the past 24 hours in the movies collection
Ran ``atlas-list-performance-advisor``
Based on the slow query logs from the sample_db.test collection, I can see that there are no slow write operations in this collection over the past 24 hours.
**Analysis Summary**
**Slow Query Logs Found in sample_db.test:**
- **1 slow query detected** - this is a READ operation (find query)
- **No write operations found** in the slow query logs
- Query occurred on 2025-09-30 at 14:52:56 UTC
**Detailed Breakdown of the Slow Query**
**Query Details:**

.. code-block:: javascript

   db.dbC.find({ "z": 10 })

**Performance Characteristics:**
- **Operation Type**: find (READ operation)
- **Collection Scan**: ``"planSummary": "COLLSCAN"``
- **Documents Examined**: 10,001 documents
- **Keys Examined**: 0 (no index usage)
- **Duration**: 6 milliseconds
- **Results Returned**: 1 document
- **Planning Time**: 1.78ms
- **Mirrored Query**: Yes (read preference: secondaryPreferred)
**Performance Issues:**
- **Missing Index**: The query on field ``z`` is performing a collection scan
- **Inefficient**: Examining 10,001 documents to find 1 result
- **Read Preference**: Using secondaryPreferred (mirrored query)
**Key Findings**
✅ **Write Performance**: No slow write operations detected
⚠️ **Read Performance**: One slow read operation due to missing index
Query for queries with specific execution times.
Show me queries longer than 5 seconds
Ran ``atlas-list-performance-advisor``
Based on the slow query logs from your MongoDB Atlas cluster, I can see that there are no queries longer than 5 seconds in your cluster performance data.
**Analysis Summary**
**Query Duration Analysis:**
- All slow queries are under 5 seconds - the longest queries are around 117ms (0.117 seconds)
- Most queries are between 30-100ms duration
- No queries exceed 5 seconds (5000ms)
**Detailed Breakdown of Slow Queries**
The slow query logs show primarily MongoDB Automation Agent queries with the following characteristics:
**Query Types Found:**
**SchemaAdvisor Collections** - MongoDB's automated performance monitoring queries
- **Duration**: 30-117ms
- **Operations**: ``$collStats`` aggregation queries
- **Purpose**: Automated performance analysis
**smol.woofer Collection** - User application queries
- **Duration**: 33ms
- **Operations**: Aggregation with ``$match`` and ``$lookup``
- **Issue**: Collection scan (no index usage)
**mcp_db.new Collection** - Storage statistics queries
- **Duration**: 33-69ms
- **Operations**: ``$collStats`` aggregation
**sample_db.test Collection** - Find queries
- **Duration**: 6ms
- **Operations**: find with collection scan
**Performance Status**: ✅ Good
Your cluster is performing well with:
- No queries longer than 5 seconds
- All slow queries are under 200ms
- Most queries are under 100ms
The slow queries are primarily from MongoDB's automated monitoring systems, which is normal and expected behavior.
Get Index Recommendations
Use the Performance Advisor to get suggestions for indexes to create or drop to improve performance.
What indexes should I create or drop to improve performance?
Ran ``atlas-list-performance-advisor`` Based on the performance advisor analysis of your MongoDB Atlas cluster "ClusterM10", here are the specific index recommendations: **Index to CREATE:** **High Priority - Create Index on sample_db.test collection:** - **Index**: { "z": 1 } - **Weight**: 1,020,000 bytes (≈ 1.02 MB saved per query) - **Impact**: This index will significantly improve performance for queries filtering on the z field - **Reason**: The slow query logs show a query {"z": 10} that performed a collection scan (COLLSCAN) and examined 10,001 documents to return just 1 result **Indexes to DROP:** **No Drop Recommendations Found** The performance advisor found no indexes that should be dropped at this time. **Additional Performance Insights:** **Schema Recommendations:** - **Remove Unnecessary Indexes**: You have collections with more than 30 indexes, which can impact write performance - **Reduce Document Size**: Some documents exceed 2MB, which can cause performance issues - **Avoid Unbounded Arrays**: Some collections have arrays with over 10,000 entries **Slow Query Analysis:** The slow query logs show that most slow queries are from MongoDB's internal automation agent running $collStats operations, which is normal. However, there's one user query that would benefit significantly from the suggested index: - **Query**: db.dbC.find({"z": 10}) - **Current Performance**: Collection scan examining 10,001 documents - **Expected Improvement**: With the suggested index, this should become an index scan examining only the matching documents **Recommended Action:** Create the index { "z": 1 } on the sample_db.test collection. This single index creation could save approximately 1MB of disk reads per query execution for queries filtering on the z field.
Schema Optimization
Get recommendations to improve your database schema design and structure.
Suggest schema improvements for my database
Ran ``atlas-list-performance-advisor``
Based on the performance advisor analysis of your MongoDB Atlas cluster, here are the specific schema improvements recommended:
**Critical Schema Issues Found:**
**1. Remove Unnecessary Indexes**
**Collections Affected:**
- SchemaAdvisor.Call to Action: Remove unnecessary indexes
- SchemaAdvisor.Call to Action: Remove unnecessary indexes 2
**Issue**: Both collections have more than 30 indexes each
**Impact:**
- Slower write operations
- Increased storage overhead
- Longer index maintenance time
- Reduced memory efficiency
**Recommendation:**
- Audit all indexes in these collections
- Remove indexes that are not actively used by your queries
- Keep only indexes that support your application's query patterns
- Consider compound indexes instead of multiple single-field indexes
**2. Reduce Document Size**
**Collections Affected:**
- SchemaAdvisor.Call to Action: Reduce the size of documents
- SchemaAdvisor.Call to Action: Reduce the size of documents 2
**Issue**: Documents larger than 2MB found
**Impact:**
- Documents cannot be stored in MongoDB (16MB limit)
- Performance degradation for large documents
- Increased memory usage
- Slower network transfer
**Recommendations:**
- Split large documents into smaller, related documents
- Use GridFS for storing large binary data
- Normalize data by moving large arrays or objects to separate collections
- Archive old data to reduce document size
- Compress data where appropriate
**3. Avoid Unbounded Arrays**
**Collections Affected:**
- SchemaAdvisor.Call to Action: Avoid using unbounded arrays in documents
**Issue**: Arrays with over 10,000 entries detected
**Impact:**
- Poor query performance on array elements
- Memory consumption issues
- Index inefficiency
- Difficult to maintain and update
**Recommendations:**
- Move large arrays to separate collections with references
- Implement pagination for array access
- Use embedded documents with limits (e.g., max 100 items)
- Consider denormalization for frequently accessed array elements
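As an illustration of the unbounded-array recommendation, a common pattern is to move array entries into their own collection and reference the parent document. The following is a minimal sketch with hypothetical database, collection, and field names:

.. code-block:: javascript

   // Instead of pushing every reading onto an ever-growing array on the parent document...
   // db.devices.updateOne({ _id: deviceId }, { $push: { readings: reading } })

   // ...store each entry as its own document that references the parent:
   const mydb = db.getSiblingDB("mcp_examples");        // hypothetical database
   mydb.device_readings.insertOne({
     deviceId: ObjectId("64b7f0c2a1b2c3d4e5f60718"),    // reference to the parent document
     ts: new Date(),
     value: 42.7
   });

   // Index the reference so lookups by parent stay fast, then paginate as needed.
   mydb.device_readings.createIndex({ deviceId: 1, ts: -1 });
   mydb.device_readings.find({ deviceId: ObjectId("64b7f0c2a1b2c3d4e5f60718") })
     .sort({ ts: -1 })
     .limit(100);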
Next Steps
To learn more about configuring and using the MongoDB MCP Server, see the following:
- Learn about all available configuration options and connection methods.
- See a comprehensive list of tools and their capabilities.
- Understand how to deploy and use the MCP Server securely.