I have a massive time-series collection currently on a standalone MongoDB 8 installation. It’s running via Docker on a VPS with 48 vCPU cores and 192 GB of RAM.
When I create an index on the collection, it takes days to complete while barely touching the CPUs, and increasing maxIndexBuildMemoryUsageMegabytes to 32 or 64 GB seems to have no effect; it doesn't even change my system RAM usage.
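For reference, this is roughly how I've been raising the parameter at runtime (the value shown is the 32 GB case):

```javascript
// In mongosh, connected as a user allowed to change server parameters
db.adminCommand({
  setParameter: 1,
  maxIndexBuildMemoryUsageMegabytes: 32768  // 32 GB; I've also tried 65536
})
```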
Is there really no way to make the index build process faster by using more of the system resources? For example, with mongoimport and mongorestore we can set the number of insertion workers, effectively multithreading the migration, which leverages more of the available compute power.
Are we stuck with index builds being single-threaded?
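To be concrete about the comparison, this is the kind of parallelism I mean on the import side (the database/collection names and paths below are just illustrative):

```sh
# mongorestore: parallelize inserts into a single collection
mongorestore --numInsertionWorkersPerCollection=16 --nsInclude="mydb.readings" /backups/dump

# mongoimport: same idea when loading from a flat file
mongoimport --db=mydb --collection=readings --numInsertionWorkers=16 readings.json
```

Both of these visibly spread load across cores, which is what I was hoping to get out of the index build as well.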
(Sidenote: please let me know if there's anything else I should change in my config so that raising maxIndexBuildMemoryUsageMegabytes from the default 200 MB to 32 or 64 GB actually takes effect.)
Thanks in advance!