Unique index creation on a large collection gets the mongod process OOM-killed

We are experiencing odd behaviour in our MongoDB instances when creating unique indexes on fields that contain duplicate values. When the collection does not have many documents (for example, ten thousand), defining a unique index in Compass on a field with duplicate values returns the following expected error:
Index build failed: guid: Collection db.Col ( guid ) :: caused by :: E11000 duplicate key error collection: db.Col index: fieldXXX_1 dup key: { fieldXXX: 1 }
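
For reference, here is a minimal mongosh sketch of what we are doing (db.Col and fieldXXX are placeholder names taken from the error above):

```javascript
// Small collection: seed a duplicate value, then try to build the unique index.
db.Col.insertMany([
  { fieldXXX: 1 },
  { fieldXXX: 1 }  // duplicate, so the unique index build must fail
]);

// On a small collection this fails quickly with the expected E11000 error.
db.Col.createIndex({ fieldXXX: 1 }, { unique: true });
```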
But if the collection is very large (for example, ten million documents), then after the index build starts in Compass, the mongod instance begins to consume huge amounts of disk I/O, memory and CPU, and the mongod process eventually gets killed by the OOM killer.
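
While resource usage climbs we can watch the build in progress with something like this (a sketch; the filter on command.createIndexes is our assumption about how the operation is reported):

```javascript
// Watch the in-progress index build while disk I/O, memory and CPU climb.
db.currentOp({ "command.createIndexes": { $exists: true } });
```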
We already know that partial indexes could prevent these cases, but we would like to understand the reason for this behaviour, and how we could avoid it by means of some change in the mongod configuration or similar, so that no developer can kill the instance by mistake.
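
For example, is something along these lines enough to cap the memory an index build can use, or does this failure path not respect it? (A sketch; the 100 MB value is arbitrary, and the default for maxIndexBuildMemoryUsageMegabytes is 200 MB as far as we can tell.)

```javascript
// Lower the memory budget available to index builds at runtime.
db.adminCommand({ setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 100 });
```

The same parameter can also be set in the setParameter section of mongod.conf so it survives restarts.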

Any advice would be really appreciated.