Java Reactive Driver performance

I’m trying to switch from the blocking Java driver to the new driver with the reactive interface, but I’m facing a serious performance bottleneck.
While migrating our application to Spring Reactor (using Spring Data MongoDB), I noticed performance dropping by more than half, so I ran some tests and ended up with the following: github/Tiller/mongo-perfs (sorry, I can only put 2 links in my post. Direct link in the table below)

Setting aside the Spring Data overhead and focusing only on the blocking driver results vs. the reactive driver results, we see a ~40% decrease in performance.

| Test | Mean Time (s) | Diff vs. Base Driver |
| --- | --- | --- |
| Java Driver (testRaw) | 3.320 | :green_circle: +0% |
| Java Reactive Driver (testReactiveRaw) | 4.627 | :red_circle: +39% |

I ran the same test on a different setup today and ended up with a similar result: ~14.5 s with the blocking driver vs. ~20.5 s with the reactive driver.

Is my testing flawed in any way? Or does the reactive driver just come with this performance hit when issuing a large number of queries?


Hi @Michael_Longo,

I’m trying to repro your results, but unfortunately when I run ./gradlew test in either project, no tests seem to run. Any idea what the issue might be? I’ve never seen this before. How did you execute the tests?


Thanks @Jeffrey_Yemin, you’re right; since I was running the project from my IDE, I didn’t think about running it through Gradle.

I’ve updated the projects and it should now work.

```
./gradlew test
```

You can find the timing of each test in the HTML report at build/reports/tests/test/index.html.

I did a little digging into your benchmarks, and there is a significant difference between the two. The synchronous benchmark spawns 12 threads and executes 300K operations across those threads. The reactive benchmark, in contrast, doesn’t attempt to limit concurrency at all. Rather, it starts all 300K operations concurrently, relying on the driver to limit concurrency at a lower level via the max size of the connection pool (which is 100 by default). But even with the driver hitting the brakes at the connection pool, spawning 300K operations concurrently puts a lot of memory pressure on the garbage collector (in fact, when I run the benchmarks with default JVM settings, I run out of memory).

Can you try changing your benchmark to provide the same concurrency limits in the reactive benchmark? This can easily be done by creating a Semaphore with NB_THREADS permits. When I tried that, the benchmarks were much closer to each other. Even with that change, the reactive benchmark was still a bit slower, but in my experience that’s in line with expectations.
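To illustrate the idea, here is a minimal, self-contained sketch of the throttling pattern (not the actual benchmark code): a `Semaphore` with a fixed number of permits bounds the number of in-flight operations, with a `CompletableFuture` standing in for the async driver call. A permit is acquired before submitting each operation and released only when the operation completes.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottledAsync {

    /**
     * Runs {@code operations} simulated async calls, allowing at most
     * {@code maxInFlight} to be in flight at once. Returns the number of
     * completed operations.
     */
    static int runThrottled(int operations, int maxInFlight) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Semaphore permits = new Semaphore(maxInFlight);
        CountDownLatch done = new CountDownLatch(operations);
        AtomicInteger completed = new AtomicInteger();

        for (int i = 0; i < operations; i++) {
            permits.acquire(); // block submission until a permit is free
            CompletableFuture
                    .runAsync(completed::incrementAndGet, pool) // stands in for the async driver call
                    .whenComplete((v, err) -> {
                        permits.release(); // release only when the operation finishes
                        done.countDown();
                    });
        }
        done.await();
        pool.shutdown();
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Same permit count as the synchronous benchmark's thread count,
        // with the operation count scaled down for illustration.
        System.out.println(runThrottled(1_000, 12));
    }
}
```

The key point is where the permit is released: in the completion callback, not at submission time, so at most `maxInFlight` operations are ever pending.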

Let me know how it goes.



Thanks for taking a look. I added some test cases and re-ran the whole thing:
Using a t3.xlarge instance on AWS with a local in-memory MongoDB, 10 runs of each test with 100k updates. You can see the results here:

I removed the slowest & fastest run of each test and ended up with:

| Test | Mean Time (s) |
| --- | --- |
| testRawWrappedWithConcurrency | 7.125000 |
| testRaw | 6.250000 |
| testReactiveRaw2 | 7.875000 |
| testReactiveRaw3 | 9.500000 |
| testReactiveRaw4 | 9.875000 |
| testReactiveRaw5 | 10.625000 |
| testReactiveRaw | 9.125000 |

As you can see, the semaphore did not do much for me (testReactiveRaw4 / testReactiveRaw5). Maybe you implemented it in a different way?

With testReactiveRaw2 I managed to lower the total time, but it is still 26% slower. Is this still within expectations?

Releasing the semaphore in the onSubscribe method won’t have the desired effect. Can you try releasing it in the onComplete method of the Subscriber instead?
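To sketch why the placement matters (this is illustrative code, not the driver's API, using the JDK's built-in `java.util.concurrent.Flow` interfaces rather than `org.reactivestreams`): releasing in `onSubscribe` frees the permit as soon as the subscription starts, before the operation has done any work, so the semaphore never actually bounds the number of in-flight operations. Releasing in `onComplete` (and `onError`) ties the permit to the operation's actual lifetime.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.Semaphore;
import java.util.concurrent.SubmissionPublisher;

/**
 * A Subscriber that holds a semaphore permit for the lifetime of the
 * stream and releases it only on terminal signals.
 */
public class PermitReleasingSubscriber<T> implements Flow.Subscriber<T> {
    private final Semaphore permits;
    private final CountDownLatch done;

    public PermitReleasingSubscriber(Semaphore permits, CountDownLatch done) {
        this.permits = permits;
        this.done = done;
    }

    @Override public void onSubscribe(Flow.Subscription s) {
        s.request(Long.MAX_VALUE); // do NOT release the permit here
    }
    @Override public void onNext(T item) { }
    @Override public void onError(Throwable t) { finish(); }
    @Override public void onComplete() { finish(); }

    private void finish() {
        permits.release(); // the operation is truly finished now
        done.countDown();
    }

    /** Hypothetical demo: the permit comes back only after onComplete fires. */
    static int demo() throws InterruptedException {
        Semaphore permits = new Semaphore(1);
        CountDownLatch done = new CountDownLatch(1);
        permits.acquire(); // taken before "starting" the operation
        try (SubmissionPublisher<String> pub = new SubmissionPublisher<>()) {
            pub.subscribe(new PermitReleasingSubscriber<>(permits, done));
            pub.submit("update");
        } // close() delivers onComplete to the subscriber
        done.await();
        return permits.availablePermits(); // back to 1 only after onComplete
    }
}
```

With the `org.reactivestreams` types the reactive driver uses, the pattern is the same: acquire before subscribing, release in the terminal callbacks.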


Sorry for being stupid… Obviously the semaphore wasn’t helping at all with what I did…

I fixed it and re-ran it: Test results - MongoPerfTest

However, I have very similar results:

| Test | Mean Time (s) | Diff vs. testRaw |
| --- | --- | --- |
| testRaw | 6.75 | 0.00% |
| testRawWrappedWithConcurrency | 7.875 | 16.67% |
| testReactiveRaw4 | 8.375 | 24.07% |
| testReactiveRaw2 | 8.75 | 29.63% |
| testReactiveRaw | 9.375 | 38.89% |
| testReactiveRaw3 | 9.625 | 42.59% |
| testReactiveRaw5 | 9.75 | 44.44% |