Indexes associated with the fields affected by a write operation are updated on the fly.
The mapping between on-disk files and in-memory structures is managed by the WiredTiger engine, and the data is made durable by periodic checkpoints that flush the cache to the files.
For this reason, maintaining many indexes can slow writes and requires more memory for each write.
Balancing the indexes your queries need against the cost of maintaining too many indexes is crucial in performance tuning.
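To illustrate why each extra index adds write overhead, here is a toy model (not MongoDB internals; the class and field names are made up for the example). Every insert must update the collection plus one structure per indexed field, so work and memory touched per write grow with the number of indexes:

```python
# Toy model of index maintenance: each secondary index is an extra
# structure that must be updated on every write.

class ToyCollection:
    def __init__(self, indexed_fields):
        self.docs = {}  # _id -> document
        # One lookup structure per indexed field: value -> set of _ids
        self.indexes = {f: {} for f in indexed_fields}

    def insert(self, _id, doc):
        self.docs[_id] = doc
        # Every index on an affected field is updated on the fly:
        for field, index in self.indexes.items():
            if field in doc:
                index.setdefault(doc[field], set()).add(_id)

coll = ToyCollection(indexed_fields=["city", "age", "status"])
coll.insert(1, {"city": "Paris", "age": 30, "status": "active"})
# A single insert touched the collection plus all three index structures.
```

With three indexes, one logical write became four physical updates; real storage engines pay a comparable per-index cost.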
Thank you very much for taking the time to explain the steps!
Is it safe to assume that WiredTiger caches all the BSON documents when MongoDB starts up, or does MongoDB cache only documents with indexes?
And are changes made to the cached data later persisted to the BSON documents on disk?
If the working set is not yet in memory, it will have to be fetched from disk.
The ideal situation for your primary is when the entire working set fits into 80% of your WiredTiger cache. If that is not possible due to size limits, try to fit at least the indexes into those 80%, as this keeps disk access minimal and direct.
For example, a server with 32GB of RAM will by default get a WiredTiger cache of about 15.5GB (50% of RAM minus 1GB), and 80% of that cache is roughly 12.5GB.
MongoDB caches WiredTiger pages; the number of documents that fit in the cache depends on the size of the documents.
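To make the numbers concrete, here is a small back-of-the-envelope calculation. The cache formula (the larger of 50% of RAM minus 1GB, or 256MB) is the documented WiredTiger default; the 15KB average document size is purely an assumed figure for illustration:

```python
# Default WiredTiger cache size: max(50% of (RAM - 1 GB), 256 MB).
ram_gb = 32
cache_gb = max(0.5 * (ram_gb - 1), 0.256)  # 15.5 GB on a 32 GB server
target_gb = 0.8 * cache_gb                 # ~12.4 GB working-set budget

# Assumed average document size of 15 KB -- illustrative only.
avg_doc_kb = 15
docs_that_fit = int(target_gb * 1024 * 1024 / avg_doc_kb)

print(f"cache: {cache_gb} GB, 80% target: {target_gb:.1f} GB")
print(f"~{docs_that_fit:,} documents of ~{avg_doc_kb} KB fit in the target")
```

Repeating this with your own average document size (from `db.collection.stats()`) gives a rough feel for whether your working set can stay resident.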