Oplog size and Kubernetes Persistent Volume

Hi all,

I deployed a MongoDB StatefulSet on a Kubernetes cluster and associated a 20 GB persistent volume with each instance of my database. I expected an oplog size of 1 GB (the default is 5% of free disk space), but I observed an oplog size of 25 GB, which is 5% of the node's total free disk space (500 GB). Do you know if there is a way to use the PV size instead of the total partition size for the oplog size calculation? If not, I will have to manually resize the oplog to match my PV size (or increase the PV size).

Hi @Jacques and welcome to the MongoDB community!!

If the dbPath for the oplog resides in a PersistentVolume, then the database size, including the oplog, should be bounded by that volume, so I believe the oplog size calculation should take the volume's size into account.
Also, starting in MongoDB 4.0, unlike other capped collections, the oplog can grow past its configured size limit to avoid deleting the majority commit point.
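As a side note, on MongoDB 4.0+ you can also resize the oplog of a running member without a restart using the `replSetResizeOplog` admin command. A sketch from the mongo shell, where the 990 MB target (roughly 5% of a 20 GB volume) is just an illustrative value for your setup:

```
// Run against each replica set member in turn.
// The size value is in megabytes; 990 MB is illustrative (~5% of a 20 GB PV).
db.adminCommand({ replSetResizeOplog: 1, size: 990 })

// Verify the new configured size and the time window the oplog covers.
db.getReplicationInfo()
```

Note that shrinking the oplog reduces the replication window, so make sure the new size still covers your expected secondary downtime and backup intervals.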

By default it should work this way, but since it doesn't in your case, could you please provide a few more details:

  1. The deployment type on the Kubernetes cluster.
  2. The MongoDB version.
  3. The YAML files for the deployment.
  4. The output of db.getReplicationInfo() for the deployment.
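In the meantime, if you want the oplog capped explicitly rather than relying on the 5% default, you can set it in the mongod configuration. A minimal sketch of the relevant fragment, assuming a standard mongod.conf and an illustrative 990 MB cap (~5% of your 20 GB PV):

```
# mongod.conf — set the oplog size explicitly instead of the 5% default
replication:
  replSetName: rs0        # illustrative replica set name
  oplogSizeMB: 990        # ~5% of a 20 GB persistent volume
```

This only takes effect on the initial creation of the oplog; for an already-initialized member you would resize it with the `replSetResizeOplog` command instead.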

Let us know if you have any further queries.

Best Regards
