Hello everyone,
I'm currently using the C# driver (v2.17.1) to connect to a local MongoDB database, loop through every document, and update a field inside each document.
One iteration might take 2 to 5 seconds.
This works up until about 1.4k to 1.5k documents, after which the following exception is thrown:
```
MongoDB.Driver.MongoCursorNotFoundException: Cursor 4299221987179083891 not found on server localhost:27017 using connection 1465.
   at MongoDB.Driver.Core.Operations.AsyncCursor`1.ExecuteGetMoreCommand(IChannelHandle channel, CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Operations.AsyncCursor`1.GetNextBatch(CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Operations.AsyncCursor`1.MoveNext(CancellationToken cancellationToken)
   at MongoDB.Driver.Core.Operations.AsyncCursorEnumerator`1.MoveNext()
```
My C# code looks like this:
```csharp
string connectionString = @"mongodb://localhost:27017/";
MongoClient dbClient = new MongoClient(connectionString);
string dbName = "mongodbname";
string collectionName = "mongodbcollectionname";
var db_sample = dbClient.GetDatabase(dbName);
var collection = db_sample.GetCollection<DataModel>(collectionName);
foreach (DataModel cdm in collection.AsQueryable())
{
    // Iterate over and update each document.
}
```
Any help is much appreciated!
Thanks in advance.
Hi, @bharath_narayan,
Welcome to the MongoDB Community Forums.
When a query fetches documents from MongoDB, those documents are returned in 16MB batches. The initial query returns the first 16MB batch along with a `cursorId`, which can be iterated further by calling `getMore` with that `cursorId`. Fetching additional results is handled automatically by the driver, which internally calls `getMore` when the current batch is exhausted.
By default, MongoDB terminates idle cursors after 10 minutes. Thus if it takes you more than 10 minutes to process a 16MB batch, the `getMore` will fail because the idle cursor has been killed.
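For illustration, here is a rough sketch of that iteration made explicit with the driver's cursor API (assuming the `DataModel` type and `collection` variable from your snippet); after the first batch, each `MoveNext()` call fetches one more batch from the server:

```csharp
using MongoDB.Driver;

// Rough sketch: iterating a cursor batch by batch.
// After the first batch, each MoveNext() triggers an internal getMore.
using (IAsyncCursor<DataModel> cursor =
           collection.Find(FilterDefinition<DataModel>.Empty).ToCursor())
{
    while (cursor.MoveNext()) // one server round trip per batch
    {
        foreach (DataModel cdm in cursor.Current) // documents in the current batch
        {
            // If processing a whole batch takes longer than the cursor
            // timeout (10 minutes by default), the next MoveNext() throws
            // MongoCursorNotFoundException, as in your stack trace.
        }
    }
}
```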
There are a few possible solutions (a rough sketch of each follows the list):
- The easiest solution is to retrieve all the data prior to processing the results. This is only feasible if the retrieved data can all fit into memory. Simply adding `ToList()` (e.g. `collection.AsQueryable().ToList()`) will retrieve all results into memory prior to processing each one.
- Specify a smaller batch size via `AsQueryable(new AggregateOptions { BatchSize = N })`, where N is the number of documents to pull back at a time. By reducing the number of documents per batch, you can force the driver to call `getMore` prior to the 10-minute cursor timeout.
- Page through the results explicitly by implementing pagination supported by a sort, e.g. `collection.AsQueryable().Where(x => x.SortField > lastSeen).OrderBy(x => x.SortField).Take(10)`.
- Adjust the server parameter `cursorTimeoutMillis` to a higher value. Not recommended, but a potential quick fix if you're trying to do a one-time run on your local machine.
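To make those options concrete, here is a rough sketch of each. The `DataModel` shape, the `Id` property, and the batch/page sizes are illustrative assumptions, not requirements; each option is an independent alternative:

```csharp
using System.Collections.Generic;
using System.Linq;
using MongoDB.Bson;
using MongoDB.Driver;

// Assumed document shape, for illustration only.
public class DataModel
{
    public ObjectId Id { get; set; }
    // ... other fields ...
}

public static class CursorTimeoutWorkarounds
{
    public static void Run(IMongoClient dbClient, IMongoCollection<DataModel> collection)
    {
        // Option 1: materialize everything up front (only if it fits in memory).
        List<DataModel> all = collection.AsQueryable().ToList();

        // Option 2: smaller batches, so the driver issues getMore well before
        // the 10-minute cursor timeout. Tune BatchSize to your per-document
        // processing time: at 5 seconds per document, 100 docs is ~8 minutes
        // per batch, which is too close to the limit, so pick something smaller.
        var queryable = collection.AsQueryable(new AggregateOptions { BatchSize = 50 });

        // Option 3: explicit pagination keyed on a unique, indexed field
        // (here the _id, mapped to the Id property above).
        var lastSeen = ObjectId.Empty;
        while (true)
        {
            var page = collection.AsQueryable()
                .Where(x => x.Id > lastSeen)
                .OrderBy(x => x.Id)
                .Take(100)
                .ToList();
            if (page.Count == 0) break;
            foreach (var cdm in page)
            {
                // Process and update each document; the page is already fully
                // in memory, so long processing times are safe here.
            }
            lastSeen = page[page.Count - 1].Id;
        }

        // Option 4 (not recommended): raise the server-wide cursor timeout.
        // 3600000 ms = 1 hour; note this affects every cursor on the server.
        dbClient.GetDatabase("admin").RunCommand<BsonDocument>(
            new BsonDocument { { "setParameter", 1 }, { "cursorTimeoutMillis", 3600000 } });
    }
}
```

For the pagination option, make sure the field you sort on is unique (or break ties with `_id`) so that documents aren't skipped or repeated between pages.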
Hope this helps.
Sincerely,
James