When data is committed to disk via a checkpoint, the WiredTiger storage engine needs to write the latest version of data before freeing unused pages associated with the previous checkpoint. This is described in Snapshots and Checkpoints in the MongoDB server documentation:
During the write of a new checkpoint, the previous checkpoint is still valid. As such, even if MongoDB terminates or encounters an error while writing a new checkpoint, upon restart, MongoDB can recover from the last valid checkpoint.
The new checkpoint becomes accessible and permanent when WiredTiger’s metadata table is atomically updated to reference the new checkpoint. Once the new checkpoint is accessible, WiredTiger frees pages from the old checkpoints.
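To make the sequence concrete, here is a minimal sketch (not WiredTiger's actual code, just an illustration of the copy-on-write checkpoint idea described above): new pages are written while the previous checkpoint remains valid, the metadata pointer is swapped atomically, and only then are the old checkpoint's pages freed for reuse.

```python
# Toy model of copy-on-write checkpointing. All names are illustrative.
class CheckpointedStore:
    def __init__(self):
        self.pages = {}        # page_id -> bytes on "disk"
        self.metadata = None   # root page of the live (durable) checkpoint
        self.next_id = 0

    def _write_page(self, data):
        page_id = self.next_id
        self.next_id += 1
        self.pages[page_id] = data
        return page_id

    def checkpoint(self, data):
        old_root = self.metadata
        new_root = self._write_page(data)  # old checkpoint is still valid here;
                                           # a crash at this point recovers from it
        self.metadata = new_root           # atomic swap: new checkpoint is now live
        if old_root is not None:
            del self.pages[old_root]       # old pages freed only after the swap
        return new_root

store = CheckpointedStore()
store.checkpoint(b"v1")
store.checkpoint(b"v2")
print(store.pages[store.metadata])  # b'v2'
```

A crash between `_write_page` and the metadata swap leaves the previous checkpoint untouched, which is why recovery from the last valid checkpoint is always possible.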
Previously allocated space will be reused where possible, as noted in the Storage FAQ:
The WiredTiger storage engine maintains lists of empty records in data files as it deletes documents. This space can be reused by WiredTiger, but will not be returned to the operating system unless under very specific circumstances.
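The free-list behaviour the FAQ describes can be sketched roughly like this (a hypothetical model, not WiredTiger's allocator): deleting a record puts its extent on an in-file free list, and later writes reuse those extents instead of growing the file, so the file size on disk does not shrink.

```python
# Toy model of in-file space reuse via a free list. Names are illustrative.
class DataFile:
    def __init__(self):
        self.size = 0    # bytes the file occupies on disk
        self.free = []   # (offset, length) extents available for reuse

    def allocate(self, length):
        for i, (off, ln) in enumerate(self.free):
            if ln >= length:                 # reuse a freed extent first
                self.free[i] = (off + length, ln - length)
                return off
        off = self.size                      # otherwise grow the file
        self.size += length
        return off

    def delete(self, offset, length):
        # Space is kept inside the file for reuse, not returned to the OS.
        self.free.append((offset, length))

f = DataFile()
a = f.allocate(1024)
f.delete(a, 1024)
b = f.allocate(512)   # reuses the freed extent; f.size stays 1024
```

In a real deployment you can see this reclaimable space in `db.collection.stats()` under the `wiredTiger` block-manager statistics (for example, "file bytes available for reuse").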
That makes sense, I understand it now.

Can I ask you one more question?
I ran 6 million update commands against a MongoDB collection containing 20 million documents (23GB of data, about 1KB per document).
Afterwards, the data files grew from 23GB to 57GB.
As far as I understand, the updates should have needed roughly 6GB of extra space, but the files grew by more than twice that.
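A back-of-the-envelope calculation for the numbers above (all figures approximate): 6 million updates of ~1KB documents naively suggests ~6GB of rewritten data, but if an update grows or relocates a document, the old on-disk version is only reclaimed lazily, and B-tree pages are rewritten whole, so the file-level growth can be several times the logical update size.

```python
# Rough arithmetic on the figures from the question.
GB = 1024 ** 3
doc_size = 1024          # ~1KB per document
updates = 6_000_000

logical_rewrite = updates * doc_size / GB
print(f"logical data rewritten: ~{logical_rewrite:.1f} GB")

observed_growth = 57 - 23            # GB, from the question
amplification = observed_growth / logical_rewrite
print(f"observed growth: {observed_growth} GB "
      f"(~{amplification:.0f}x the logical rewrite)")
```

That ~6x gap between logical writes and on-disk growth is what the free-list behaviour above would eventually reclaim for reuse, even though the files themselves do not shrink.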