90 seconds for 250 records does not seem normal to me.
> What is the standard benchmark for reading 1000 records from a MongoDB server using PyMongo 3.9?
Assuming these are small documents (<1KB), they can all be returned to the client in a single network round trip, so the total time should be roughly equal to the network latency.
In general, though, the answer depends on a number of factors:
- What server are you running against? MongoDB Atlas? If so, what size cluster: free-tier M0, M5, M20, etc.?
- What is the network latency from the application to the server? 10ms? 500ms?
- What is the average size of the returned documents? ~100 bytes, ~1KB, ~1MB?
- How long does the server take to satisfy the query? Perhaps the query can be sped up with an index?
- Was pymongo installed with the C extensions? These speed up pymongo’s BSON encoding/decoding. You can check with:
python3 -c 'import bson;print(bson.has_c())'
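To see which of these factors dominates, a first step is simply to time the query end to end with `time.perf_counter()`. Below is a minimal sketch of a timing helper; the commented-out MongoDB usage assumes a reachable server at `localhost:27017` with a hypothetical collection `test.mycoll`:

```python
import time

def time_it(fn, repeats=3):
    """Return the best wall-clock time (in seconds) over several runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Usage against MongoDB (requires a running server; names are placeholders):
#   from pymongo import MongoClient
#   coll = MongoClient("mongodb://localhost:27017").test.mycoll
#   print(time_it(lambda: list(coll.find().limit(1000))))

# Sanity check on a cheap local function:
elapsed = time_it(lambda: sum(range(10_000)))
print(f"{elapsed:.6f}s")
```

Taking the best of a few runs filters out one-off costs such as connection setup and server cache warm-up, so the number is closer to the steady-state latency you actually care about.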
A final note: you can use cProfile to determine where the CPU time (as opposed to I/O time) is being spent:
- The Python Profilers — Python 3.12.2 documentation
python3 -m cProfile -s time myscript.py
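If you would rather profile just the query section instead of the whole script, cProfile can also be driven programmatically. This is a sketch; `run_query` is a hypothetical stand-in for your actual PyMongo call:

```python
import cProfile
import io
import pstats

def run_query():
    # Stand-in for your real query, e.g. list(collection.find(...))
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
run_query()
profiler.disable()

# Sort by internal time, mirroring `-s time` on the command line,
# and print only the top 5 entries.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("time").print_stats(5)
print(stream.getvalue())
```

If most of the time shows up inside `bson` decoding functions, the C extensions (or document size) are the issue; if almost no CPU time is recorded at all, the 90 seconds is being spent waiting on the network or the server.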