Timeseries: Poor Performance on Indexed TimeField


I created two collections. One is a timeseries, the other is not.
I added an index on the ts field to both collections and then wrote 1,000,000 random documents to each.
The code to do that is as follows:

test> db.createCollection("test1", {timeseries: {timeField: "ts", metaField: "md", granularity: "seconds"}})
{ ok: 1 }
test> db.createCollection("test2")
{ ok: 1 }
test> db.test1.createIndex({ts: -1})
test> db.test2.createIndex({ts: -1})
test> for (var i = 1; i <= 100; i++) {
...     const docs = [...Array(10000)].map(_ => ({
.....         ts: new Date(new Date().getTime() - (Math.random() * 172800000)),
.....         md: "",
.....         value: Math.random() * 1000000
.....     }));
...     db.test1.insertMany(docs);
...     db.test2.insertMany(docs);
... }
test> db.test1.countDocuments()
test> db.test2.countDocuments()
test> db.test1.find({}).sort({ts: -1}).limit(1)
[
  {
    ts: ISODate("2021-12-20T18:45:43.956Z"),
    md: '',
    _id: ObjectId("61c0cf58435319318616399e"),
    value: 933482.8127467203
  }
]
test> db.test2.find({}).sort({ts: -1}).limit(1)
[
  {
    _id: ObjectId("61c0cf6143531931861660ae"),
    ts: ISODate("2021-12-20T18:45:43.956Z"),
    md: '',
    value: 933482.8127467203
  }
]

Querying the timeseries collection (test1) is MUCH slower than querying the regular collection (test2).

test1 query explain and stats: db.test1.find({}).sort({ts: -1}).limit(1).explain() (Pastebin link)
test2 query explain and stats: db.test2.find({}).sort({ts: -1}).limit(1).explain() (Pastebin link)

Is there something wrong with my query? Or the way I’m structuring my data?

I’ve tried using .hint() to force the timeseries collection to use the index but the performance is the same.

Would love any ideas or suggestions.

Thank you


We have noticed the exact same issue. Currently on MongoDB 5.2.0.

Sorting by timestamp on a timeseries collection does not use the index.
We made sure that index order and sort order are the same.

Is there any update on this?

The only workaround we have found so far is using a $match on the timestamp to limit the number of documents as much as possible, but that is not possible in all cases.
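The $match workaround can be sketched as follows. The one-hour window and the helper name `latestWithinWindow` are assumptions, not something from this thread; the window has to be tuned to how fresh your data is:

```javascript
// Sketch of the $match workaround described above: bound the scan with a
// time window on the timeField before sorting, so only buckets inside the
// window have to be unpacked. The window size (1 hour here) is an assumption.
function latestWithinWindow(windowMs, now = Date.now()) {
  return {
    filter: { ts: { $gte: new Date(now - windowMs) } }, // $match on the timeField
    sort: { ts: -1 },
    limit: 1,
  };
}

// With the collections from earlier in the thread, the query would be:
//   const q = latestWithinWindow(60 * 60 * 1000);
//   db.test1.find(q.filter).sort(q.sort).limit(q.limit)
```

Note the trade-off the poster mentions: if no document falls inside the window, the query returns nothing, so callers may have to retry with a wider window.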

A statement from MongoDB on why this happens, and how to work around it, would be great!

Same here.
Any update about this?
In my use case I want to calculate the ranges of time-contiguous data, and I have to sort the data before doing my calculations. Limiting the number of documents also gives me better performance, but that is not a real solution in my case.
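For the contiguous-range use case above, the range computation itself can be done client-side once the documents arrive in timestamp order. A minimal sketch, where `contiguousRanges` and the `maxGapMs` gap threshold are hypothetical names, not anything from the thread:

```javascript
// Split an already-sorted list of timestamps (ascending, in ms) into
// time-contiguous ranges: a new range starts whenever two consecutive
// timestamps are more than `maxGapMs` apart.
function contiguousRanges(timestamps, maxGapMs) {
  const ranges = [];
  for (const t of timestamps) {
    const last = ranges[ranges.length - 1];
    if (last && t - last.end <= maxGapMs) {
      last.end = t;                        // extend the current range
    } else {
      ranges.push({ start: t, end: t });   // gap detected: start a new range
    }
  }
  return ranges;
}
```

This only sidesteps the sort problem if the documents can be fetched in order cheaply, which is exactly what the slow index usage in this thread prevents; it does, however, keep the range logic out of the aggregation pipeline.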


One thing I found that improves performance is an index in DESCENDING order on the timestamp field; at least on MongoDB 6.0.0 this behaves differently. A compound index on the meta field + timestamp field does not provide the same performance: it will still load and unpack all the documents that match the query criteria, and then sort and limit.
You might try to enforce use of the index with the .hint() function: https://www.mongodb.com/docs/manual/reference/method/cursor.hint/
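Put together, the workaround from this post amounts to a descending single-field index on the timeField plus an explicit hint. A sketch under those assumptions, with the index and query shape expressed as plain objects so they can be passed to either mongosh or the Node driver:

```javascript
// In mongosh, the workaround from this post would look like:
//   db.test1.createIndex({ ts: -1 })
//   db.test1.find({}).sort({ ts: -1 }).limit(1).hint({ ts: -1 })
// The same shapes as plain objects:
const indexSpec = { ts: -1 };      // single-field DESCENDING index on the timeField
const latestQuery = {
  sort: { ts: -1 },                // same field and direction as the index
  limit: 1,
  hint: indexSpec,                 // force the planner onto that index
};
```

Whether the planner actually uses the index on time-series collections varies by server version, as the posts above show, so check the explain() output rather than assuming it.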


Hi, any updates on this issue?

I'm also receiving an alarm from MongoDB Atlas because the number of scanned documents exceeds the limit of 1000.

Right now I'm using version 5.0.12 on the cloud.

After a long search for a fast way to read the latest time-series document in my collection,
I found that this was efficient with the Node.js driver:

const collection = db.collection(datasource.collectionName);
// $natural: -1 walks the collection in reverse insertion order,
// so the newest document comes back without sorting the whole collection
const latest = await collection.findOne({iSensor: 0}, { sort: {$natural: -1} });

Or with mongosh