What sort of deployment do you have (standalone, replica set, or sharded cluster)? If you have a replica set or sharded cluster, can you describe the roles of your instances (Primary, Secondary, and Arbiter) and confirm whether you are seeing the `WiredTigerLAS.wt` growth on the Primary, the Secondaries, or both?
`WiredTigerLAS.wt` is an overflow buffer for data that does not fit in the WiredTiger cache but cannot be persisted to the data files yet (analogous to "swap" if you run out of system memory). This file should be removed on restart by `mongod`, as it is not useful without the context of the in-memory WiredTiger cache, which is freed when `mongod` is restarted.
If you are seeing unbounded growth of `WiredTigerLAS.wt`, likely causes are a deployment that is severely underprovisioned for the current workload, a replica set configuration with significant replication lag, or a Primary-Secondary-Arbiter (PSA) replica set with a secondary unavailable.
The last scenario is highlighted in the documentation (Read Concern `majority` and Three-Member PSA) and as a startup warning in recent versions of MongoDB (3.6.10+, 4.0.5+, and 4.2.0+).
The `maxCacheOverflowFileSizeGB` configuration option mentioned by @chris will cap the cache overflow file so it cannot grow unbounded, but it is a safeguard rather than a fix for the underlying problem.
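If you do want to set that cap while investigating, a minimal sketch of a `mongod.conf` fragment is below. The `100` GB value is purely illustrative — choose a limit appropriate for your disk capacity, and note this parameter is only available in recent patch releases:

```yaml
# mongod.conf fragment -- cap the WiredTiger cache overflow (LAS) file.
# Value is in GB; 100 here is only an example, not a recommendation.
# A value of 0 (the default) means no limit is enforced.
setParameter:
  maxCacheOverflowFileSizeGB: 100
```

This server parameter can also be changed at runtime with `setParameter` from the shell, which avoids a restart while you diagnose the root cause.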
Please provide additional details on your deployment so we can try to identify the issue.