Hello,
I have a database of 6.4M documents with an average document size of 8 kB.
A document has a schema like this:
{"group_ulid": str, "position": int, "..."}
Each document has 15 other fields, each of which is either:
- a dict with 5-10 keys
- a small list (max 5 elements) of dicts with 5-10 keys
(A simplified example document is sketched below.)
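To illustrate, here is a simplified sketch of one document; every field name except group_ulid and position is invented for illustration:

example_doc = {
    "group_ulid": "...",              # ULID string shared by all documents in the group
    "position": 1,                    # int position within the group
    "meta": {"k1": 1, "k2": 2},       # one of the dict fields (5-10 keys in reality)
    "items": [{"k1": 1}, {"k2": 2}],  # one of the small lists (max 5 elements) of dicts
    # ... 13 more fields of the same two kinds
}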
I want to retrieve all documents for a given group_ulid (~5,000-10,000 documents), but it is slow (~1.5 seconds). I'm using PyMongo:
res = collection.find({"group_ulid": "..."})
res = list(res)
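For completeness, the surrounding setup looks roughly like this (the URI and the database/collection names are placeholders, not my real ones):

import pymongo

client = pymongo.MongoClient("mongodb://localhost:27017")  # placeholder URI
collection = client["mydb"]["mycollection"]                # placeholder names

# Fetch all documents of one group and materialize the cursor into a list.
res = list(collection.find({"group_ulid": "..."}))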
I am running MongoDB in Docker on an instance with 16 GB of RAM and 2 vCPUs.
I have an ascending index on group_ulid; the index is about 30 MB.
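It was created with PyMongo, roughly like this (using the same collection object as above):

import pymongo

# Single-field ascending index on group_ulid.
collection.create_index([("group_ulid", pymongo.ASCENDING)])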
Are there ways to make it faster? Is this normal behavior?
Thanks