
MongoDB MCP Server Usage Examples

This page shows how to use the MongoDB MCP Server through practical examples with natural-language prompts. Each example shows a prompt that you can enter into your AI client and an example of the response that you might receive from the LLM (large language model).

You can use the examples on this page as a starting point for your interactions with the MongoDB MCP Server.

Note

The exact output depends on your data, your AI client, and the LLM that you use. In these examples, private information such as organization IDs, project IDs, and passwords has been redacted.

Before running these examples, make sure you have:

  • A working MongoDB MCP Server configuration

  • Access to a MongoDB deployment with data to explore

  • Sample data loaded into your deployment
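A working configuration (the first prerequisite) is typically a server entry in your AI client's MCP settings. The following is a hedged sketch only — the exact package name, option names, and settings-file location depend on your client and are covered in the configuration documentation linked at the end of this page:

```json
{
  "mcpServers": {
    "MongoDB": {
      "command": "npx",
      "args": [
        "-y",
        "mongodb-mcp-server",
        "--connectionString",
        "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/"
      ]
    }
  }
}
```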

The following examples show how to manage Atlas infrastructure using the MongoDB MCP Server.

Get an overview of your Atlas account structure and available resources.

Show my Atlas organizations and projects
Ran ``atlas-list-orgs``
Ran ``atlas-list-projects``
Here are your MongoDB Atlas organizations and projects:
**Organizations:**
TestOrganization
Organization ID: XXX
**Projects:**
TestProject
Project ID: XXX
Organization: TestOrganization
Created: August 7, 2020 at 11:48:55 AM
You have one organization containing one project. This
project contains your TestCluster that we're currently
connected to.

Set up a new Atlas project with a cluster and all required access configuration in a single workflow.

Create a new Atlas project named myNewProject and create a
free cluster named myNewCluster in the new project, add
access list for my current IP, create a database user named
myNewUser with read and write access to the new cluster, and
return the connection string for the new cluster
I'll create a new Atlas project, free cluster, and set up
all the necessary access details for you.
Ran ``atlas-create-project``
Great! I've created a new project named "myNewProject"
in your MongoDB Atlas organization. Now let's list the
projects to get the project ID for the new project:
Ran ``atlas-list-projects``
Now I'll create a free cluster named "myNewCluster" in the
new project:
Ran ``atlas-create-free-cluster``
Now, I'll add your current IP address to the access list:
Ran ``atlas-create-access-list``
Now let's create a database user named "myNewUser" with read
and write access:
Ran ``atlas-create-db-user``
Perfect! Everything has been set up successfully:
**New Project Information:**
Project Name: myNewProject
Project ID: XXX
**New Cluster Information:**
Cluster Name: myNewCluster
Region: US East (US_EAST_1)
**Database User:**
Username: myNewUser
Password: XXX
Access: Read and write to any database
**Connection String:**
mongodb+srv://myNewUser:XXX@myNewCluster.mongodb.net/
Your current IP address has been added to the access list,
so you should be able to connect to the cluster from your
current location. The cluster will take a few minutes to
fully provision and be ready for connections.
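The connection string returned above follows the standard `mongodb+srv://` URI format. A small standard-library sketch, using placeholder credentials since the real ones are redacted, shows how the pieces break down:

```python
from urllib.parse import urlparse

# Placeholder URI in the same shape as the one returned above.
uri = "mongodb+srv://myNewUser:REDACTED@myNewCluster.mongodb.net/"

parts = urlparse(uri)
print(parts.scheme)    # mongodb+srv  (SRV-based seedlist discovery)
print(parts.username)  # myNewUser
print(parts.hostname)  # mynewcluster.mongodb.net (hostnames are case-insensitive)
```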

The following examples show how to explore and understand your MongoDB data using the MongoDB MCP Server.

Discover the databases available in your cluster and get insights into their contents.

Show my Atlas databases
Ran ``list-databases``
Here are the databases in your Atlas cluster "TestCluster":
**Sample Databases:**
sample_mflix - 138.4 MB (Movie database with films, theaters, users, comments)
sample_training - 53.6 MB (Training data for learning MongoDB)
sample_airbnb - 55.3 MB (Airbnb listings and reviews data)
sample_analytics - 10.0 MB (Analytics and customer data)
sample_restaurants - 6.8 MB (Restaurant and review data)
sample_weatherdata - 2.7 MB (Weather observation data)
sample_geospatial - 1.3 MB (Geospatial data examples)
sample_supplies - 1.1 MB (Supply chain data)
sample_guides - 41 KB (Getting started guides)
**System Databases:**
local - 4.1 GB (MongoDB internal database)
admin - 365 KB (Administrative database)
Your cluster contains MongoDB's sample datasets, which are
great for learning and testing. The largest databases are
sample_mflix (movies), sample_training, and sample_airbnb.
Would you like me to explore any specific database or show
you the collections within one of them?

Take a closer look at a specific database to understand its structure and collections.

Show the collections in the Atlas sample_mflix database
Ran ``list-collections``
Here are the collections in the sample_mflix database:
**Core Collections:**
movies - Main collection containing movie data
(titles, genres, cast, ratings, etc.)
users - User profiles and account information
comments - User comments and reviews on movies
theaters - Movie theater locations and details
sessions - User session data
**Additional Collections:**
embedded_movies - Movies collection with embedded/denormalized data structure
The movies collection contains detailed information
about films including plot, cast, directors, ratings,
release dates, and genres. The comments collection links to
movies and users to store user reviews and discussions.
Would you like me to explore any specific collection in more
detail, such as showing sample documents or collection
statistics?

Analyze your MongoDB data to identify patterns and trends.

Summarize the Atlas movies collection by thrillers
I'll help you analyze the Atlas movies collection focusing
on thrillers. First, let me check what databases are
available in your connected cluster.
Ran ``list-databases``
Perfect! I can see you have the sample_mflix database which
contains the movies collection. Let me check the collections
in that database:
Ran ``list-collections``
Great! Now let me analyze the movies collection to find
thrillers and provide a summary. First, let me check the
schema to understand how genres are stored:
Ran ``collection-schema``
Perfect! I can see that genres are stored as an array of
strings. Now let me get a count of thriller movies and then
perform an aggregation to summarize them:
Ran ``count``
Now let me create a comprehensive summary of thriller movies
using aggregation:
Ran ``aggregate``
I see there are some data quality issues. Let me get better
data by filtering for movies with numeric ratings:
Ran ``aggregate``
Now let me get some more summary statistics by decade:
Ran ``aggregate``
Here's a comprehensive analysis of thriller movies in your
Atlas sample_mflix collection:
**Overall Statistics**
Total Thriller Movies: 2,454 films
Time Span: 1924 to 2011
Average Rating: 6.3/10 (IMDb)
Average Runtime: 105 minutes
Total IMDb Votes: 85.8 million across all thrillers
**Top-Rated Thriller Movies**
Drishyam (2013) - 8.9/10 - Indian thriller directed by Jeethu Joseph
Tinker Tailor Soldier Spy (1979) - 8.7/10 - British spy thriller
Rear Window (1954) - 8.6/10 - Alfred Hitchcock classic
The Silence of the Lambs (1991) - 8.6/10 - Psychological thriller
Se7en (1995) - 8.6/10 - David Fincher crime thriller
The Usual Suspects (1995) - 8.6/10 - Neo-noir mystery
Léon: The Professional (1994) - 8.6/10 - Action thriller
Psycho (1960) - 8.6/10 - Another Hitchcock masterpiece
Memento (2000) - 8.5/10 - Christopher Nolan psychological thriller
**Trends by Decade**
1920s-1950s: Few thrillers but high quality (avg 7.2-7.8 rating)
1960s-1970s: Growing popularity, still maintaining quality
1980s: Significant growth (200 films), quality decline begins
1990s: Peak production era (417 films), avg 6.3 rating
2000s: Highest volume (854 films), avg 6.24 rating
2010s: 730 films, lowest average rating (6.12)
The data shows that while thriller production increased
dramatically from the 1980s onward, the average quality (as
measured by IMDb ratings) has generally declined, suggesting
the genre became more commercialized with higher volume but
potentially lower artistic standards in recent decades.
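The transcript above doesn't show the pipelines the model ran, but given the sample_mflix schema (genres as an array of strings, ratings under `imdb.rating`), they plausibly looked something like the following sketch:

```python
# Overall thriller summary: match on genre, keep numeric ratings only
# (the "data quality issues" the model mentioned), then aggregate.
thriller_summary = [
    {"$match": {"genres": "Thriller", "imdb.rating": {"$type": "number"}}},
    {"$group": {
        "_id": None,
        "total": {"$sum": 1},
        "avgRating": {"$avg": "$imdb.rating"},
        "avgRuntime": {"$avg": "$runtime"},
    }},
]

# By-decade breakdown: group on the year rounded down to the decade.
by_decade = [
    {"$match": {"genres": "Thriller", "imdb.rating": {"$type": "number"}}},
    {"$group": {
        "_id": {"$subtract": ["$year", {"$mod": ["$year", 10]}]},
        "films": {"$sum": 1},
        "avgRating": {"$avg": "$imdb.rating"},
    }},
    {"$sort": {"_id": 1}},
]

# With a live pymongo connection (not shown):
#   client["sample_mflix"]["movies"].aggregate(thriller_summary)
```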

The following examples show how to perform common database operations using the MongoDB MCP Server.

Add new data to a collection.

Add a document to the movies_copy collection
I'll add a new document to the movies_copy collection. Let
me create a sample movie document with the same structure as
the existing movies:
Ran ``insert-many``
Perfect! I've successfully added a new movie document to the
movies_copy collection. Here's what I added:
**New Movie Added:**
Title: "The AI Thriller"
Genres: Thriller, Sci-Fi, Drama
Year: 2024
Runtime: 118 minutes
IMDb Rating: 7.8/10 (125,000 votes)
Cast: John Smith, Sarah Johnson, Michael Chen
Director: Alex Rodriguez
Rating: PG-13
The document was inserted with ID: 684b15fb5084ddef1f265e8f
Let me verify the collection now has one additional
document:
Ran ``count``
Confirmed! The movies_copy collection now contains 21,350
documents (up from 21,349), including your new thriller
movie about AI consciousness.

Create and customize a new collection.

Create a new collection to store movie purchases data that
includes geospatial and timeseries fields
I'll create a new collection for storing movie purchases
data with geospatial and timeseries fields. Let me first
create the collection and then add some sample documents
with the appropriate structure.
Ran ``create-collection``
Now I'll insert sample documents that include geospatial
data (customer location, theater location) and timeseries
data (purchase timestamp, viewing timestamp):
Ran ``insert-many``
Perfect! I've created the movie_purchases collection with
sample data that includes both geospatial and timeseries
fields. Here's what the collection contains:
**Collection Structure**
**Geospatial Fields:**
customer_location: GeoJSON Point with customer's coordinates
theater_location: GeoJSON Point with theater coordinates
customer_address: Human-readable address information
**Timeseries Fields:**
purchase_timestamp: When the purchase was made
viewing_timestamp: When the movie was actually watched
viewing_duration_minutes: How long the customer watched
completion_percentage: Percentage of movie completed
**Sample Data Added:**
Purchase 1: Customer in San Francisco bought "The Shawshank Redemption" rental for $12.99
Purchase 2: Customer in New York purchased "Pulp Fiction" for $15.99
Purchase 3: Customer in Los Angeles rented "Se7en" for $9.99
Now let me create a geospatial index to optimize
location-based queries:
Ran ``create-index``
Excellent! The movie_purchases collection is now ready with:
- Geospatial capabilities - 2dsphere index for location-based queries
- Timeseries optimization - Index on purchase timestamps
- Sample data - 3 purchase records across different US cities
This collection can now support queries like:
- Finding purchases within a certain radius of a location
- Analyzing purchase patterns over time
- Correlating viewing behavior with geographic regions
- Time-based analytics and trending
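Based on the fields the response above describes, the inserted documents plausibly had a shape like the following sketch (field names are assumptions; note that GeoJSON coordinates are ordered [longitude, latitude]):

```python
from datetime import datetime, timezone

# Illustrative movie_purchases document using the fields described above.
purchase = {
    "movie_title": "The Shawshank Redemption",
    "purchase_type": "rental",
    "price": 12.99,
    "purchase_timestamp": datetime(2024, 6, 1, 19, 30, tzinfo=timezone.utc),
    "customer_location": {"type": "Point", "coordinates": [-122.4194, 37.7749]},
    "theater_location": {"type": "Point", "coordinates": [-122.4089, 37.7837]},
}

# Index specs in pymongo style: a 2dsphere index for location-based queries
# and an ascending index on the purchase timestamp for time-based queries.
geo_index = [("customer_location", "2dsphere")]
time_index = [("purchase_timestamp", 1)]
# With a live connection: collection.create_index(geo_index)
```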

Create a backup or replica of an existing collection.

Make a copy of the movies collection named movies_copy
I'll create a copy of the movies collection named
movies_copy. I'll use an aggregation pipeline with $out to
copy all documents from the movies collection to a new
movies_copy collection.
Ran ``aggregate``
Perfect! I've successfully created a copy of the movies
collection named movies_copy. Let me verify the copy was
created correctly by checking the document count:
Ran ``count``
Found 21349 documents in the collection "movies_copy"
Excellent! The movies_copy collection has been successfully
created with all 21,349 documents from the original movies
collection. The copy operation completed successfully and
you now have an identical backup of your movies collection.
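The `$out` stage writes a pipeline's output to a named collection, so the copy above likely reduced to a one-stage pipeline like this sketch:

```python
# The simplest $out-based copy: read every document, write it to movies_copy.
copy_pipeline = [{"$out": "movies_copy"}]

# With a live pymongo connection (not shown):
#   client["sample_mflix"]["movies"].aggregate(copy_pipeline)
# Note: $out replaces the target collection entirely if it already exists.
```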

The following example shows how to export query results for sharing with external tools or for further processing.

Generate a comprehensive report and export it so that you can use it in other applications or share it with your team.

Exported results are saved to a file on the machine that runs the MCP Server. You can also access the exported data through the exported-data resource using your AI client application.

Export files are saved to the file system path specified by the exportPath configuration option. Export files can be deleted after the period specified by the exportTimeout configuration option elapses. For more information, see Export Data from the MongoDB MCP Server.
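These two options sit alongside the rest of your MCP Server configuration. A sketch with illustrative values only — verify the units and defaults in the export documentation referenced above:

```json
{
  "exportPath": "/Users/<user>/.mongodb/mongodb-mcp/exports",
  "exportTimeout": 300000
}
```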

Summarize and export the contents of the movies collection.
I'll help you summarize and export the contents of the movies
collection from your connected MongoDB Atlas cluster. Let's do
this in two steps:
1. First, I'll create a summary of the movies collection using an
aggregation pipeline.
2. Then I'll export the results using the export tool.
Let's start by getting a summary of the movies collection:
Ran ``aggregate``
Now that we have a summary of the movies collection, let's create
a more focused dataset to export. Let's get the top 10
highest-rated movies:
Ran ``aggregate``
Now, let's export this data using the MCP Server export tool:
Ran ``export``
I've successfully summarized and exported the contents of the
movies collection from your MongoDB Atlas cluster. Here's a
breakdown of what I did:
**Movies Collection Summary**
**Collection Overview:**
The collection contains 21,349 movie documents
Time span: 1896 to 2016 (120 years of cinema history)
Average IMDb rating: 6.66 out of 10
Average runtime: ~104 minutes
**Top Genres:**
Drama (12,385 movies)
Comedy (6,532 movies)
Romance (3,318 movies)
Crime (2,457 movies)
Thriller (2,454 movies)
**Top Directors:**
Woody Allen (40 movies)
Martin Scorsese (32 movies)
Takashi Miike (31 movies)
Steven Spielberg (29 movies)
John Ford (29 movies)
**Top Languages:**
English (14,639 movies)
French (2,405 movies)
Spanish (1,600 movies)
German (1,377 movies)
Italian (1,228 movies)
**Highest Rated Movies:**
Band of Brothers (2001) - 9.6/10
Planet Earth (2006) - 9.5/10
The Civil War (1990) - 9.4/10
A Brave Heart: The Lizzie Velasquez Story (2015) - 9.4/10
Cosmos (1980) - 9.3/10
**Export Details**
The data has been exported to a JSON file in
relaxed format. You can access the exported file at:
/Users/XXX/.mongodb/mongodb-mcp/exports/68b8c10a6d0291128800be0b/sample_mflix.movies.68b8c2496d0291128800be0d.json
The export includes:
- Genre summary with counts and average ratings
- Runtime statistics (min, max, average)
- Year-by-decade summary with counts and ratings
- Top directors with film counts
- Language distribution
- Top 10 highest-rated movies
This export provides a comprehensive overview of the movies
collection that can be easily shared or imported into other tools
for further analysis or visualization.

You can use the MCP Server with local Atlas deployments. To use the MCP Server tools with local Atlas deployments, you must install Docker. For an introduction to local Atlas deployments, see Create a Local Atlas Deployment.

The following examples show how to use the MongoDB MCP Server to create and interact with local Atlas deployments.

The following example lists your local Atlas deployments.

List all local Atlas deployments.
I'll list all the local Atlas deployments for you.
Ran ``atlas-local-list-deployments``
Here are all your local Atlas deployments:
Found 1 deployment:
Deployment Name State MongoDB Version
local7356 Running 8.2.1
You currently have one Atlas Local deployment named local7356
that is running MongoDB version 8.2.1.

The following example connects to a local Atlas deployment named local7356 and lists its databases.

Connect to my local Atlas deployment local7356 and list the
databases.
I'll connect to your local Atlas deployment local7356 and then
list the databases.
Ran ``atlas-local-connect-deployment``
Now I'll list the databases in this deployment:
Ran ``list-databases``
Successfully connected to Atlas Local deployment local7356! Here
are the databases in this deployment:
Found 4 databases:
admin - 262,144 bytes (256 KB)
config - 237,568 bytes (~232 KB)
local - 1,163,264 bytes (~1.1 MB)
testdb - 438,272 bytes (~428 KB)
Your local Atlas deployment contains:
System databases: admin, config, and local (standard MongoDB
system databases)
User database: testdb (your custom database)
The local database has grown to about 1.1 MB, which is normal for
local operations and metadata storage. Would you like me to
explore any of these databases further, such as showing the
collections in testdb?

The following example creates a new local Atlas deployment with an example database and collection. To run this example, you must disable read-only mode.

Create a new local Atlas deployment, connect to it, create a
collection with sample pizza orders data, and then retrieve the
sample data.
I'll help you create a new local Atlas deployment, connect to it,
create a collection with sample pizza orders data, and then
retrieve it. Let me break this down into steps:
Created 4 todos
Now let me start with the first task:
Starting (1/4) Create new local Atlas deployment
Ran ``atlas-local-create-deployment``
Great! Created deployment local5528. Now let me mark that as
completed and connect to it:
Starting (2/4) Connect to new deployment
Ran ``atlas-local-connect-deployment``
Perfect! Now let me create the pizza orders collection with sample
data:
Starting (3/4) Create pizza orders collection
Ran ``create-collection``
Now let me insert sample pizza orders data:
Great! Now let me retrieve and display the sample data:
Starting (4/4) Retrieve and display sample data
Ran ``find``
Completed (4/4) Retrieve and display sample data
Perfect! I've successfully completed all the tasks. Here's a
summary of what was accomplished:
Task Completed Successfully
1. Created New Local Atlas Deployment
Deployment Name: local5528
Container ID: XXX
Status: Running
2. Connected to the New Deployment
Successfully connected to deployment local5528
3. Created Pizza Orders Collection
Database: pizzarestaurant
Collection: orders
Documents Inserted: 5 pizza orders
4. Sample Pizza Orders Data Created
The collection now contains 5 realistic pizza orders with the
following details:
Order ORD-001 - John Smith (New York)
Margherita Pizza (Large) + Pepperoni Pizza (Medium)
Total: $34.98 | Status: Completed | Delivery
Order ORD-002 - Sarah Johnson (Brooklyn)
2x Hawaiian Pizza (Large) + Caesar Salad
Total: $50.97 | Status: Preparing | Pickup
Order ORD-003 - Mike Chen (Manhattan)
Meat Lovers Pizza (Extra Large) + 2x Garlic Bread + 3x Soda
Total: $40.95 | Status: Delivered | Delivery
Order ORD-004 - Emily Davis (Queens)
Veggie Supreme Pizza (Medium)
Total: $17.99 | Status: Ready | Pickup
Order ORD-005 - David Wilson (Staten Island)
BBQ Chicken Pizza (Large) + Buffalo Wings
Total: $35.98 | Status: Out for Delivery | Delivery
Data Structure Features:
Customer Information: Name, phone, email, complete address
Order Details: Order ID, date, delivery type, payment method, status
Items: Pizza types, sizes, quantities, prices, detailed toppings
Financial Data: Individual item prices and total amounts
Your new local Atlas deployment is now ready with a fully
functional pizza restaurant orders system!

The following example deletes the local Atlas deployment named local5528. To run this example, you must disable read-only mode.

Delete my local Atlas deployment local5528.
I'll delete your local Atlas deployment local5528 for you.
Ran ``atlas-local-delete-deployment``

After the atlas-local-delete-deployment tool deletes a local Atlas deployment, the AI client might respond that the deployment has already been deleted. This is because the atlas-local-delete-deployment tool ran successfully and the local Atlas deployment no longer exists.

The following examples show how to use the MongoDB MCP Server to identify and resolve performance issues. Because the actual output depends on whether your cluster is experiencing performance issues, your output might not match the examples exactly.

These examples use the atlas-get-performance-advisor tool to retrieve recommendations from the Performance Advisor.

Identify slow queries and analyze them to understand performance bottlenecks.

Note

When performing slow query analysis, the MongoDB MCP Server retrieves a sample of slow queries limited to 50 queries. This sample includes up to 50 of the most recent slow queries that match the criteria specified in your prompt, to ensure optimal performance and response times.

Query all slow queries in your cluster.

Show me slow queries in my cluster
Ran ``atlas-get-performance-advisor``
I found extensive slow query logs for ClusterM10. Here's a summary of the slow queries detected:
**Query Types and Patterns**
**Most Common Slow Queries:**
**Collection Statistics Queries ($collStats)**
- **Collections**: SchemaAdvisor.Call to Action: Remove unnecessary indexes and SchemaAdvisor.Call to Action: Remove unnecessary indexes 2
- **Duration**: 30-100+ milliseconds
- **Pattern**: These are MongoDB Automation Agent queries collecting storage statistics
- **Frequency**: Very frequent (every 10 minutes)
**Collection Statistics for mcp_db.new**
- **Duration**: 30-60+ milliseconds
- **Pattern**: Similar $collStats operations
**Find Query on sample_db.test**
- **Query**: {"find": "dbC", "filter": {"z": 10}}
- **Duration**: 5 milliseconds
- **Plan**: COLLSCAN (Collection Scan)
- **Performance Issue**: Examined 10,001 documents but only returned 1
- This matches the index suggestion we saw earlier!
**Key Performance Issues**
**Inefficient Query on sample_db.test:**
- The find query {"z": 10} is doing a full collection scan
- This directly relates to the index suggestion we found: creating an index on field z would significantly improve this query
- Currently examining 10,001 documents to find 1 result
**Automation Agent Overhead:**
- Many slow queries are from MongoDB Automation Agent collecting statistics
- These are system-level operations that run regularly
- Duration typically 30-100ms, which is acceptable for system operations
**Recommendations**
**Implement the Suggested Index:**
- Create an index on {"z": 1} for the sample_db.test collection
- This will eliminate the collection scan and dramatically improve the find query performance
**Monitor Automation Agent Performance:**
- The frequent $collStats operations are normal but worth monitoring
- Consider if the collection statistics collection frequency can be optimized
The most actionable finding is the inefficient query on sample_db.test that would benefit from the index suggestion we identified earlier.

Query for general performance issues.

How is my cluster performance?
Ran ``atlas-get-performance-advisor``
Based on the performance advisor data I retrieved for ClusterM10, here's an assessment of your cluster performance:
**Overall Performance Assessment: Good with Room for Optimization**
**✅ Positive Indicators**
- **Query Response Times**: Most queries are completing in reasonable timeframes (30-100ms)
- **No Critical Issues**: No extremely slow queries (>1000ms) detected
- **Stable Operations**: Consistent performance patterns over time
- **System Health**: MongoDB Automation Agent operations are functioning normally
**⚠️ Areas for Improvement**
**Inefficient Query Pattern:**
- **Issue**: Find query {"z": 10} on sample_db.test collection
- **Current**: Full collection scan examining 10,001 documents
- **Impact**: Unnecessary resource consumption
- **Solution**: Implement suggested index on field z
**Resource Utilization:**
- **CPU Usage**: Queries consuming 5-45+ CPU nanoseconds
- **Memory**: Response sizes up to 602KB for statistics queries
- **I/O**: Some queries showing storage read operations
**📊 Performance Metrics Summary**
- **Slow Query Count**: 50+ entries in recent logs
- **Average Query Duration**: 30-100ms (acceptable range)
- **Most Common Operations**: Collection statistics gathering
- **Index Optimization Opportunity**: 1 significant improvement identified
**🎯 Immediate Recommendations**
**High Priority**: Create index {"z": 1} on sample_db.test collection
- **Expected Impact**: Eliminate collection scan, reduce query time from 5ms to <1ms
- **Resource Savings**: Reduce CPU and I/O usage significantly
**Medium Priority**: Monitor automation agent performance
- Current 30-100ms for $collStats operations is acceptable
- Consider optimizing if frequency becomes problematic
**📈 Performance Grade: B+**
Your cluster is performing well overall, but implementing the suggested index would elevate it to an A-grade performance level. The main bottleneck is the inefficient query pattern that can be easily resolved with proper indexing.
Would you like me to help you implement the suggested index or dive deeper into any specific performance metrics?

Query by specific operation type, duration, or namespace.

Show me slow writes for the past 24 hours in the movies collection
Ran ``atlas-get-performance-advisor``
Based on the slow query logs from the sample_db.test collection, I can see that there are no slow write operations in this collection over the past 24 hours.
**Analysis Summary**
**Slow Query Logs Found in sample_db.test:**
- **1 slow query detected** - this is a READ operation (find query)
- **No write operations found** in the slow query logs
- Query occurred on 2025-09-30 at 14:52:56 UTC
**Detailed Breakdown of the Slow Query**
**Query Details:**
.. code-block:: javascript

   db.dbC.find({ "z": 10 })
**Performance Characteristics:**
- **Operation Type**: find (READ operation)
- **Collection Scan**: ``"planSummary": "COLLSCAN"``
- **Documents Examined**: 10,001 documents
- **Keys Examined**: 0 (no index usage)
- **Duration**: 6 milliseconds
- **Results Returned**: 1 document
- **Planning Time**: 1.78ms
- **Mirrored Query**: Yes (read preference: secondaryPreferred)
**Performance Issues:**
- **Missing Index**: The query on field ``z`` is performing a collection scan
- **Inefficient**: Examining 10,001 documents to find 1 result
- **Read Preference**: Using secondaryPreferred (mirrored query)
**Key Findings**
✅ **Write Performance**: No slow write operations detected
⚠️ **Read Performance**: One slow read operation due to missing index

Query by specific execution time.

Show me queries longer than 5 seconds
Ran ``atlas-get-performance-advisor``
Based on the slow query logs from your MongoDB Atlas cluster, I can see that there are no queries longer than 5 seconds in your cluster performance data.
**Analysis Summary**
**Query Duration Analysis:**
- All slow queries are under 5 seconds - the longest queries are around 117ms (0.117 seconds)
- Most queries are between 30-100ms duration
- No queries exceed 5 seconds (5000ms)
**Detailed Breakdown of Slow Queries**
The slow query logs show primarily MongoDB Automation Agent queries with the following characteristics:
**Query Types Found:**
**SchemaAdvisor Collections** - MongoDB's automated performance monitoring queries
- **Duration**: 30-117ms
- **Operations**: ``$collStats`` aggregation queries
- **Purpose**: Automated performance analysis
**smol.woofer Collection** - User application queries
- **Duration**: 33ms
- **Operations**: Aggregation with ``$match`` and ``$lookup``
- **Issue**: Collection scan (no index usage)
**mcp_db.new Collection** - Storage statistics queries
- **Duration**: 33-69ms
- **Operations**: ``$collStats`` aggregation
**sample_db.test Collection** - Find queries
- **Duration**: 6ms
- **Operations**: find with collection scan
**Performance Status**: ✅ Good
Your cluster is performing well with:
- No queries longer than 5 seconds
- All slow queries are under 200ms
- Most queries are under 100ms
The slow queries are primarily from MongoDB's automated monitoring systems, which is normal and expected behavior.

You can use the Performance Advisor to suggest indexes to create or drop to improve performance.

What indexes should I create or drop to improve performance?
Ran ``atlas-get-performance-advisor``
Based on the performance advisor analysis of your MongoDB Atlas cluster "ClusterM10", here are the specific index recommendations:
**Index to CREATE:**
**High Priority - Create Index on sample_db.test collection:**
- **Index**: { "z": 1 }
- **Weight**: 1,020,000 bytes (≈ 1.02 MB saved per query)
- **Impact**: This index will significantly improve performance for queries filtering on the z field
- **Reason**: The slow query logs show a query {"z": 10} that performed a collection scan (COLLSCAN) and examined 10,001 documents to return just 1 result
**Indexes to DROP:**
**No Drop Recommendations Found**
The performance advisor found no indexes that should be dropped at this time.
**Additional Performance Insights:**
**Schema Recommendations:**
- **Remove Unnecessary Indexes**: You have collections with more than 30 indexes, which can impact write performance
- **Reduce Document Size**: Some documents exceed 2MB, which can cause performance issues
- **Avoid Unbounded Arrays**: Some collections have arrays with over 10,000 entries
**Slow Query Analysis:**
The slow query logs show that most slow queries are from MongoDB's internal automation agent running $collStats operations, which is normal. However, there's one user query that would benefit significantly from the suggested index:
- **Query**: db.dbC.find({"z": 10})
- **Current Performance**: Collection scan examining 10,001 documents
- **Expected Improvement**: With the suggested index, this should become an index scan examining only the matching documents
**Recommended Action:**
Create the index { "z": 1 } on the sample_db.test collection. This single index creation could save approximately 1MB of disk reads per query execution for queries filtering on the z field.
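The recommended index can be created directly once you decide to act on the advice. A sketch in pymongo style, using the database and collection names from the advisor output above:

```python
# Index spec for the advisor's recommendation: an ascending index on "z".
index_spec = [("z", 1)]

# With a live pymongo connection (not shown):
#   client["sample_db"]["test"].create_index(index_spec)
# mongosh equivalent: db.test.createIndex({ z: 1 })
```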

Get recommendations for improving your database schema design and structure.

Suggest schema improvements for my database
Ran ``atlas-get-performance-advisor``
Based on the performance advisor analysis of your MongoDB Atlas cluster, here are the specific schema improvements recommended:
**Critical Schema Issues Found:**
**1. Remove Unnecessary Indexes**
**Collections Affected:**
- SchemaAdvisor.Call to Action: Remove unnecessary indexes
- SchemaAdvisor.Call to Action: Remove unnecessary indexes 2
**Issue**: Both collections have more than 30 indexes each
**Impact:**
- Slower write operations
- Increased storage overhead
- Longer index maintenance time
- Reduced memory efficiency
**Recommendation:**
- Audit all indexes in these collections
- Remove indexes that are not actively used by your queries
- Keep only indexes that support your application's query patterns
- Consider compound indexes instead of multiple single-field indexes
**2. Reduce Document Size**
**Collections Affected:**
- SchemaAdvisor.Call to Action: Reduce the size of documents
- SchemaAdvisor.Call to Action: Reduce the size of documents 2
**Issue**: Documents larger than 2MB found
**Impact:**
- Documents that grow past MongoDB's 16MB BSON limit cannot be stored
- Performance degradation for large documents
- Increased memory usage
- Slower network transfer
**Recommendations:**
- Split large documents into smaller, related documents
- Use GridFS for storing large binary data
- Normalize data by moving large arrays or objects to separate collections
- Archive old data to reduce document size
- Compress data where appropriate
**3. Avoid Unbounded Arrays**
**Collections Affected:**
- SchemaAdvisor.Call to Action: Avoid using unbounded arrays in documents
**Issue**: Arrays with over 10,000 entries detected
**Impact:**
- Poor query performance on array elements
- Memory consumption issues
- Index inefficiency
- Difficult to maintain and update
**Recommendations:**
- Move large arrays to separate collections with references
- Implement pagination for array access
- Use embedded documents with limits (e.g., max 100 items)
- Consider denormalization for frequently accessed array elements

To learn more about configuring and using the MongoDB MCP Server, see the following:

Learn about all available configuration options and connection methods.

Explore the complete list of tools and their capabilities.

Understand how to deploy and use the MCP Server securely.
