MongoDB interface for high-speed updates

Are there any examples with source code for high-speed (at least 10,000 record reads/writes per second) MongoDB reads/updates of a single record at a time?
Alternatively, where could I look in the MongoDB server code for a way to inject a customised put/get of a record, for example with the WiredTiger storage engine?
Thank you for any hint on where to start.

Hi @Guglielmo_Fanini,

Please provide some more details to help us understand your use case.

For example:

  • Are you planning for 10,000 reads, writes, or both per second?
  • You mention single record at a time – is that 10,000 writes/second to a single document or in aggregate?
  • What does your data model look like?
  • What type of deployment are you planning (standalone, replica set, or sharded cluster)?

Before considering server modifications, it would be best to understand your use case. There may be more suitable approaches via data modelling or application architecture.

If you are definitely just looking for a key/value API without a distributed deployment, it may be more straightforward to use the WiredTiger library directly.

Regards,
Stennie

I’d need to concurrently read and update about 10,000 different records per second. Currently I am using a custom database on Windows, and since I see it accesses the file system directly, I was wondering if I could somehow access the files in mongodbserverpath\server\4.0\data\*.wt directly, possibly without even needing the MongoDB server running, or concurrently with it (via the mongod.lock file, I’d guess)?

Hi @Guglielmo_Fanini,

If you only want to work with WiredTiger as a key/value database and do not desire any distributed database features, you can use the WiredTiger library directly as a standalone database library. If you take this approach, it would be independent of any integration with the MongoDB server and the only applicable documentation would be the WiredTiger Developer Site I referenced in my earlier response. You do not need a MongoDB server installed to use the WiredTiger database library, and you would be working with WiredTiger’s API including schema and data types. Any data files would be created via your own application.

If you want the networking, querying, and distributed database features that MongoDB adds, you would instead access your data through a MongoDB deployment and focus on data modelling and capacity planning to support your use case and desired performance.

Regards,
Stennie

I resorted to caching reads on my client side, similarly to what I have read about memcached being used in front of MySQL. For writes I am using the MongoDB C driver’s bulk write API, which seems able to do about 50,000 deletes per second, I think because the cost of interpreting each command is minimal. Are there any “stored procedures” available server side to speed things up somehow? I gather there is a built-in JavaScript interpreter?