FAQ: MongoDB Diagnostics
This document provides answers to common diagnostic questions and issues.
If you don’t find the answer you’re looking for, check the complete list of FAQs or post your question to the MongoDB User Mailing List.
Where can I find information about a mongod process that stopped running unexpectedly?
If mongod shuts down unexpectedly on a UNIX or UNIX-based platform, and if mongod fails to log a shutdown or error message, then check your system logs for messages pertaining to MongoDB. For example, for logs located in /var/log/messages, use the following commands:
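A minimal sketch, assuming syslog writes to /var/log/messages (the path varies by distribution; some systems use /var/log/syslog instead). The second search looks for Linux out-of-memory (OOM) killer activity, a common cause of unexpected mongod shutdowns:

    sudo grep mongod /var/log/messages
    sudo grep score /var/log/messages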
Does TCP keepalive time affect sharded clusters and replica sets?
If you experience socket errors between members of a sharded cluster or replica set that do not have other reasonable causes, check the TCP keepalive value, which Linux systems store as the tcp_keepalive_time value. A common keepalive period is 7200 seconds (2 hours); however, different distributions and OS X may have different settings. For MongoDB, you will have better experiences with shorter keepalive periods, on the order of 300 seconds (five minutes).
On Linux systems you can use the following operation to check the value of tcp_keepalive_time:
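For example, either of the following reports the current value, in seconds:

    cat /proc/sys/net/ipv4/tcp_keepalive_time
    sysctl net.ipv4.tcp_keepalive_time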
You can change the tcp_keepalive_time value with the following operation:
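For example, to set a 300-second keepalive (sudo tee is used because plain shell redirection would not run with elevated privileges):

    echo 300 | sudo tee /proc/sys/net/ipv4/tcp_keepalive_time

Equivalently:

    sudo sysctl -w net.ipv4.tcp_keepalive_time=300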
The new tcp_keepalive_time value takes effect without requiring you to restart the mongod or mongos servers. The setting does not persist across reboots, however: when you restart your system, you must set the tcp_keepalive_time value again, or see your operating system's documentation for setting the TCP keepalive value persistently.
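On many Linux distributions you can persist the setting with a line in /etc/sysctl.conf, which is applied at boot; a minimal sketch (the exact file and loading mechanism vary by distribution):

    # /etc/sysctl.conf
    net.ipv4.tcp_keepalive_time = 300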
For OS X systems, issue the following command to view the keepalive setting:
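The relevant sysctl OID on OS X is net.inet.tcp.keepidle, which is expressed in milliseconds (the name and units here are stated as an assumption; check your OS version):

    sysctl net.inet.tcp.keepidle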
To set a shorter keepalive period use the following invocation:
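For example, 300 seconds expressed in milliseconds (this change does not survive a reboot):

    sudo sysctl -w net.inet.tcp.keepidle=300000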
If your replica set or sharded cluster experiences keepalive-related issues, you must alter the tcp_keepalive_time value on all machines hosting MongoDB processes. This includes all machines hosting mongos or mongod servers.
Windows users should consider the Windows Server Technet article on KeepAliveTime configuration for more information on setting keepalive for MongoDB deployments on Windows systems.
What tools are available for monitoring MongoDB?
The MongoDB Management Service (MMS) <http://mms.mongodb.com> includes monitoring. MMS Monitoring is a free, hosted service for monitoring MongoDB deployments. A full list of third-party tools is available as part of the Monitoring Database Systems documentation. Also consider the MMS documentation.
Memory Diagnostics
Do I need to configure swap space?
Always configure systems to have swap space. Without swap, your system may not be resilient in some situations with extreme memory constraints, memory leaks, or multiple programs using the same memory. Think of swap space as something like a steam release valve that allows the system to release extra pressure without affecting the overall functioning of the system.
Nevertheless, systems running MongoDB do not need swap for routine operation. Database files are memory-mapped and should constitute most of your MongoDB memory use. Therefore, it is unlikely that mongod will ever use any swap space in normal operation. The operating system will release memory from the memory-mapped files without needing swap, and MongoDB can write data to the data files without needing the swap system.
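To confirm swap is configured on a Linux host, either of the following will show it (swapon --show comes from newer util-linux releases; older systems use swapon -s):

    free -h
    swapon --show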
Must my working set size fit RAM?
Your working set should stay in memory to achieve good performance. Otherwise many random disk I/O operations will occur, and unless you are using SSDs, this can be quite slow.
One area to watch specifically in managing the size of your working set is index access patterns. If you are inserting into indexes at random locations (as would happen with ids that are randomly generated by hashes), you will continually be updating the whole index. If instead you are able to create your ids in approximately ascending order (for example, day concatenated with a random id), all the updates will occur at the right side of the b-tree and the working set size for index pages will be much smaller.
It is fine if databases and thus virtual size are much larger than RAM.
How do I calculate how much RAM I need for my application?
The amount of RAM you need depends on several factors, including but not limited to:
- The relationship between database storage and working set.
- The operating system’s cache strategy for LRU (Least Recently Used)
- The impact of journaling
- The number or rate of page faults and other MMS gauges to detect when you need more RAM
MongoDB defers to the operating system when loading data into memory from disk. It simply memory-maps all its data files and relies on the operating system to cache data. The OS typically evicts the least-recently-used data from RAM when it runs low on memory. For example, if clients access indexes more frequently than documents, then indexes will more likely stay in RAM, but it depends on your particular usage.
To calculate how much RAM you need, you must calculate your working set size, or the portion of your data that clients use most often. This depends on your access patterns, what indexes you have, and the size of your documents.
If page faults are infrequent, your working set fits in RAM. If fault rates rise, you risk performance degradation. This is less critical with SSD drives than with spinning disks.
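One way to watch fault rates is the mongostat utility that ships with MongoDB; on Linux, its faults column reports page faults (shown here sampling every 5 seconds):

    mongostat 5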
How do I read memory statistics in the UNIX top command?
Because mongod uses memory-mapped files, the memory statistics in top require special interpretation. On a large database, VSIZE (virtual bytes) tends to be the size of the entire database. If the machine has no other significant processes running, RSIZE (resident bytes) approaches the total memory of the machine, as this counts file system cache contents.
For Linux systems, use the vmstat command to help determine how the system uses memory. On OS X systems, use vm_stat.
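For example (note the underscore in the OS X command name):

    vmstat 1 10    # Linux: ten one-second samples
    vm_stat 1      # OS X: one-second samples until interrupted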
Sharded Cluster Diagnostics
The two most important factors in maintaining a successful sharded cluster are:
- choosing the best possible shard key for your deployment, and
- adding additional capacity to your cluster well before the current resources become saturated.
Attending to these two factors will prevent most issues encountered with sharding. Continue reading for specific issues you may encounter in a production environment.
In a new sharded cluster, why does all data remain on one shard?
Your cluster must have sufficient data for sharding to make sense. Sharding works by migrating chunks between the shards until each shard has roughly the same number of chunks.
The default chunk size is 64 megabytes. MongoDB will not begin migrations until the imbalance of chunks in the cluster exceeds the migration threshold. While the default chunk size is configurable with the chunkSize setting, these behaviors help prevent unnecessary chunk migrations, which can degrade the performance of your cluster as a whole.
If you have just deployed a sharded cluster, make sure that you have enough data to make sharding effective. If you do not have sufficient data to create more than eight 64 megabyte chunks, then all data will remain on one shard. Either lower the chunk size setting, or add more data to the cluster.
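As a sketch of lowering the chunk size on a running cluster, you can update the chunksize document in the config database from a mongos (the value is in megabytes; 32 here is only an example):

    mongo --eval 'db.getSiblingDB("config").settings.save({ _id: "chunksize", value: 32 })'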
As a related problem, the system will split chunks only on inserts or updates, which means that if you configure sharding and do not continue to issue insert and update operations, the database will not create any chunks. You can either wait until your application inserts data or split chunks manually.
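To split manually, connect to a mongos and use the sh.splitAt() helper, which splits the chunk containing a given shard key value at that value; the namespace and key below are hypothetical:

    mongo --eval 'sh.splitAt("records.people", { zipcode: "63109" })'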
Finally, if your shard key has a low cardinality, MongoDB may not be able to create sufficient splits among the data.
Why would one shard receive a disproportionate amount of traffic in a sharded cluster?
In some situations, a single shard or a subset of the cluster will receive a disproportionate portion of the traffic and workload. In almost all cases this is the result of a shard key that does not effectively allow write scaling.
It’s also possible that you have “hot chunks.” In this case, you may be able to solve the problem by splitting and then migrating parts of these chunks.
In the worst case, you may have to consider re-sharding your data and choosing a different shard key to correct this pattern.
What can prevent a sharded cluster from balancing?
If you have just deployed your sharded cluster, you may want to consider the troubleshooting suggestions for a new cluster where data remains on a single shard.
If the cluster was initially balanced, but later developed an uneven distribution of data, consider the following possible causes:
- You have deleted or removed a significant amount of data from the cluster. If you have added additional data, it may have a different distribution with regard to its shard key.
- Your shard key has low cardinality and MongoDB cannot split the chunks any further.
- Your data set is growing faster than the balancer can distribute data around the cluster. This is uncommon and typically is the result of:
- a balancing window that is too short, given the rate of data growth.
- an uneven distribution of write operations that requires more data migration. You may have to choose a different shard key to resolve this issue.
- poor network connectivity between shards, which may lead to chunk migrations that take too long to complete. Investigate your network configuration and interconnections between shards.
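To inspect the current chunk distribution and whether the balancer is running, connect to a mongos and run the sh.status() helper, which summarizes both:

    mongo --eval 'sh.status()'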
Why do chunk migrations affect sharded cluster performance?
If migrations impact your cluster or application’s performance, consider the following options, depending on the nature of the impact:
- If migrations only interrupt your cluster sporadically, you can limit the balancing window to prevent balancing activity during peak hours (see the sketch after this list). Ensure that there is enough time remaining to keep the data from becoming out of balance again.
- If the balancer is always migrating chunks to the detriment of overall cluster performance:
- You may want to attempt decreasing the chunk size to limit the size of the migration.
- Your cluster may be over capacity, and you may want to attempt to add one or two shards to the cluster to distribute load.
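As a sketch of the balancing-window approach mentioned above, you can set an activeWindow on the balancer document in the config database from a mongos (the hours shown are placeholders; the final true enables upsert, creating the document if it does not exist):

    mongo --eval 'db.getSiblingDB("config").settings.update({ _id: "balancer" }, { $set: { activeWindow: { start: "23:00", stop: "06:00" } } }, true)'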
It’s also possible that your shard key causes your application to direct all writes to a single shard. This kind of activity pattern can require the balancer to migrate most data soon after writing it. Consider redeploying your cluster with a shard key that provides better write scaling.