Spark connector for MongoDB in Python

I have more than 44k records in a collection. I need to load those records in batches (i.e., with a batch size of 10k).
I found the Spark connector's input configuration options.

How can I read the 44k records in batches of 10k using the Spark connector? Is it enough to simply set batchSize in the ReadConfig?
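
For reference, this is roughly what I am trying; a minimal sketch assuming mongo-spark-connector 2.x/3.x (the "mongo" data source), with placeholder URI and collection names. I am not sure whether the batchSize option does what I expect here:

```python
from pyspark.sql import SparkSession

# Assumes the connector jar is on the classpath, e.g. started with
# --packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1
spark = (
    SparkSession.builder
    .appName("mongo-read")
    .config("spark.mongodb.input.uri",
            "mongodb://127.0.0.1/mydb.mycollection")  # placeholder namespace
    .getOrCreate()
)

# Is setting batchSize here enough to pull the 44k documents 10k at a time?
# My understanding is that batchSize controls the MongoDB cursor batch size
# (documents per fetch round trip), not how Spark splits the data into
# partitions, but I may be wrong about that.
df = (
    spark.read.format("mongo")
    .option("batchSize", 10000)
    .load()
)

df.printSchema()
print(df.count())  # should still report all 44k+ documents
```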