How to build debug or release on Windows

I followed these instructions and they work with Python 3.7 and Visual Studio 2019.

It generated a 40 MB mongod.exe, which I suspect is built against the debug Microsoft runtimes. How can I specify a release build, given that the installed pre-built mongod.exe is about 18 MB?

Are there any switches, like:

python3 buildscripts/scons.py install-mongod -release/debug

Thank you for any hints.

You don’t say which version of MongoDB you are trying to build, so I’m going to assume 5.0. On that branch, whether or not you get the debug runtime is controlled by the --dbg flag:
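For example, on that branch both of these should be valid invocations (using the standard install-mongod target; --opt independently controls the optimization level):

python3 buildscripts/scons.py install-mongod --dbg=off --opt=on
python3 buildscripts/scons.py install-mongod --dbg=on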

The default is --dbg=off, so unless you explicitly say --dbg[=on] you will get the release runtime. I don’t have a quick explanation for why you are seeing such a different binary size. Could you please provide:

  • The version of MongoDB you are trying to build
  • The version of Visual Studio you have installed, and the version of Windows you are building on.
  • The exact SCons invocation you used to build
  • The size of the produced mongod binary from your build
  • A link to where you obtained the version of mongod against which you are comparing

Thanks,
Andrew


Hmm, it should be a release build, because the pre-built MongoDB Community 5.0 mongod.exe I installed is 45 MB, so that should be OK.
I did:
git clone https://github.com/mongodb/mongo.git
cd mongo
git checkout r5.0.2
scons
I will try scons --dbg=on to see the size and will update the file size here.

Do you know, by chance, of any tool to get a quick ballpark estimate of MongoDB performance on the current computer? I see there used to be a "mongoperf" to check disk I/O, in order to find out whether there is any performance hit from running inside a VMware host. Also, is there anything that could point out possible misconfigurations, on both Windows and Linux? For example, I read that on Linux, having large pages enabled can be a speed hit.

Unless you actually want a debug build (which is slower and has additional developer-facing behavior), I don’t see any point in doing a --dbg=on build, so I’d recommend not doing it.

Regarding your question on performance, unfortunately I don’t have any guidance for you. Perhaps others on the board will.

Why did you remove mongoperf in 2018?

commit 51f7e327acd1e32cd210c32917b9ed522fb875cd
Author: Andrew Morrow <acm@mongodb.com>
Date: Fri Apr 13 11:19:05 2018 -0400

SERVER-34419 Remove mongoperf

What benchmarking tool could I use to very roughly test query and update speeds?

It wasn’t particularly my decision to remove it. I wrote the commit because I was the best positioned to make the mechanical changes required for its removal. We decided to remove mongoperf because:

  • It had a hard dependency on the support code for the MMAPv1 storage engine, which we were in the process of removing after its deprecation cycle had completed, and
  • We felt that there were other I/O benchmarking utilities that could be used for the same purpose.

I’m again not really the right person to field questions about IO benchmarking, so hopefully someone with more expertise in that area can join in and provide you with some recommendations.

The real-time database I’m currently using handles on the order of 15 million queries/updates per second; it is in memory and periodically commits changes to file. Does the MongoDB in-memory storage engine have this concept? If not, do you have any idea where I could start looking in the code for committing from the in-memory storage to disk, with, say, WiredTiger?

Possibly, I found a very cheap way to increase socket buffering to the client, in case anyone knowledgeable on I/O can confirm whether this is a valid way of increasing the number of commands sent/received per second to mongod :)
src/mongo/transport/session_asio.h:

_socket.set_option(asio::ip::tcp::no_delay(true));
_socket.set_option(asio::socket_base::keep_alive(true)); // existing code up to here
// added: enlarge the OS socket buffers
_socket.set_option(asio::socket_base::send_buffer_size(1 << 23));    // 8 MB OS tx buffer
_socket.set_option(asio::socket_base::receive_buffer_size(1 << 23)); // 8 MB OS rx buffer

Hi @Guglielmo_Fanini,

As @Andrew_Morrow mentioned, mongoperf was specific to the original MMAPv1 storage engine which was deprecated in MongoDB 4.0 and removed in MongoDB 4.2. mongoperf ultimately only provided an estimated upper bound on I/O performance – actual outcomes are highly dependent on factors including your workload, deployment topology, and software versions. The most useful aspect of this tool was confirming there were no egregious MMAP performance issues with your environment, but it wasn’t an indicative measure of query performance.

There are generic and more comprehensive tools like fio for general I/O comparisons, but it is difficult to extrapolate these outcomes to database workloads that go through a much more complex execution path.
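For a quick sense of what a fio run looks like, a rough random read/write test might be (the parameters here are illustrative only, not a tuned recommendation):

fio --name=randrw --rw=randrw --bs=4k --size=1g --numjobs=4 --runtime=60 --time_based --group_reporting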

You can use load testing tools like JMeter to test concurrency and performance with your application stack and deployment configuration. This category of tools is not specific to MongoDB.

There are also Application Performance Monitoring (APM) platforms like New Relic and DataDog that facilitate full stack monitoring and can be helpful for visualising and correlating issues in your application code, database deployment, and frontend experience.

The default WiredTiger storage engine applies changes in memory with periodic durability via journaling and checkpoints, so this is a likely starting point for your use case.

The In-Memory Storage Engine available in MongoDB Enterprise intentionally avoids disk I/O and requires that all data and indexes fit in memory. The use case for this storage engine is workloads requiring more predictable latency. If you want to add persistence you can deploy a replica set with In-Memory members as well as a hidden WiredTiger member configured to write to disk per In-Memory Deployment Architectures.

This is going to be very workload dependent, but I would start by profiling your workload to understand where your actual bottlenecks are. Starting with server source code modifications is generally premature optimisation. I would try improving concurrency rather than increasing the memory usage per socket.

In one of your previous discussion topics (Mongodb interface for high speed updates) you mentioned wanting to write directly to data files. If you use WiredTiger directly as a local database library you can avoid any overhead of the network stack or higher-level query abstractions. I feel that path is better aligned with the low-level approach you are trying to take.
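As a minimal sketch only (using the public WiredTiger C API; the home directory and table name here are hypothetical, and error handling is omitted even though every call returns a status that real code should check), opening a local database and inserting one record looks roughly like this:

#include <wiredtiger.h>

int main() {
    WT_CONNECTION *conn;
    WT_SESSION *session;
    WT_CURSOR *cursor;

    // Open (or create) a database in an existing directory named WT_HOME.
    wiredtiger_open("WT_HOME", nullptr, "create", &conn);
    conn->open_session(conn, nullptr, nullptr, &session);

    // Create a table with string keys and values, then insert a single record.
    session->create(session, "table:example", "key_format=S,value_format=S");
    session->open_cursor(session, "table:example", nullptr, nullptr, &cursor);
    cursor->set_key(cursor, "hello");
    cursor->set_value(cursor, "world");
    cursor->insert(cursor);

    // close() releases all sessions and cursors and flushes to disk.
    conn->close(conn, nullptr);
    return 0;
}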

Regards,
Stennie

Compiling mongod with GCC 8 and Python 3.7 on Debian 10 with
python buildscripts/scons.py install-mongod
generates a 4 GB (?) mongod executable file?
How do I compile a minimal-size mongod on Linux?
I also need to be able to run on Debian 9, but it complains about the glibc version; is it possible to force it to link compatibly with the previous GCC 6?

It seems you need to "strip" debug information from the executable; this seems to happen automatically when building for Windows.

Yes, on Windows it happens automatically. On other platforms, you can build with --separate-debug which will use objcopy to move the debug information into separate files. You can install those files by building the -debug targets, which are enabled by --separate-debug. For instance, if you normally build scons ... install-core, you can get what you want here by building scons ... --separate-debug install-core{,-debug}. Then you can hang on to the debug symbols in case you need them later.
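If you ever need the equivalent by hand on Linux, the standard binutils workflow (not MongoDB-specific) is:

objcopy --only-keep-debug mongod mongod.debug
strip --strip-debug mongod
objcopy --add-gnu-debuglink=mongod.debug mongod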

Regarding your question on Debian: to avoid issues like the glibc version mismatch, you need to build on the same system where you intend to run.

