MongoDB suddenly crashed

I have MongoDB 4.2 running in a Docker container.
Last night Mongo suddenly crashed.
The log shows a "getMore" operation and then a server restart:

2023-08-27T04:38:40.317+0300 I COMMAND [conn33] command octdb.device_audit_hourly command: getMore { getMore: 5495031475288944971, collection: "device_audit_hourly", $db: "octdb", $
2023-08-27T07:05:42.398+0300 I CONTROL [main] ***** SERVER RESTARTED *****
2023-08-27T07:05:42.404+0300 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27016 dbpath=/data/db 64-bit host=
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] db version v4.2.3
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] git version: 6874650b362138df74be53d366bbefc321ea32d4
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] allocator: tcmalloc
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] modules: none
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] build environment:
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] distmod: ubuntu1804
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] distarch: x86_64
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] target_arch: x86_64
2023-08-27T07:05:42.660+0300 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "*", port: 27016 }, processManagement: { timeZoneInfo: "/usr/share/zoneinfo" }, replication: { replSetName: "octopusrs0" }, security: { authorization: "enabled", keyFile: "/etc/mongo-keyfile" }, storage: { dbPath: "/data/db", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log" } }
2023-08-27T07:05:42.666+0300 W STORAGE [initandlisten] Detected unclean shutdown - /data/db/mongod.lock is not empty.
2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2023-08-27T07:05:42.666+0300 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten]
2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten] ** See
2023-08-27T07:05:42.666+0300 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=63752M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2023-08-27T07:05:43.279+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:279137][1:0x7fbd88dafb00], txn-recover: Recovering log 3688 through 3689
2023-08-27T07:05:43.325+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:325574][1:0x7fbd88dafb00], txn-recover: Recovering log 3689 through 3689
2023-08-27T07:05:43.438+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:438361][1:0x7fbd88dafb00], txn-recover: Main recovery loop: starting at 3688/60554624 to 3689/256
2023-08-27T07:05:43.439+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:439450][1:0x7fbd88dafb00], txn-recover: Recovering log 3688 through 3689
2023-08-27T07:05:43.480+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:480413][1:0x7fbd88dafb00], file:sizeStorer.wt, txn-recover: Recovering log 3689 through 3689
2023-08-27T07:05:43.538+0300 I STORAGE [initandlisten] WiredTiger message [1693109143:538390][1:0x7fbd88dafb00], file:sizeStorer.wt, txn-recover: Set global recovery timestamp: (1693100255, 1)
2023-08-27T07:05:43.559+0300 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(1693100255, 1)
2023-08-27T07:05:43.574+0300 I STORAGE [initandlisten] Starting OplogTruncaterThread
2023-08-27T07:05:43.574+0300 I STORAGE [initandlisten] The size storer reports that the oplog contains 28315505 records totaling to 53595282159 bytes
2023-08-27T07:05:43.574+0300 I STORAGE [initandlisten] Sampling the oplog to determine where to place markers for truncation
2023-08-27T07:05:43.575+0300 I STORA

How can I diagnose the reason for the crash?


There is nothing in the mongod log itself to indicate why it crashed: the log simply stops mid-line and resumes at the restart, and the "Detected unclean shutdown" warning only confirms the process died without a clean stop.

Check your Docker, host, and kernel logs between the getMore timestamp (04:38) and the server-restart timestamp (07:05). A sudden kill like this is commonly the Linux OOM killer terminating mongod, or the container being stopped or restarted externally.
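As a starting point, something like the following can narrow it down. (The container name `mongodb` and the exact timestamps are placeholders for your setup; `docker logs --until` needs a reasonably recent Docker version.)

```shell
# 1. Did Docker see the container die? Exit code 137 = SIGKILL
#    (often the OOM killer); OOMKilled=true is a direct confirmation.
docker inspect mongodb --format \
  'ExitCode={{.State.ExitCode}} OOMKilled={{.State.OOMKilled}} FinishedAt={{.State.FinishedAt}}'

# 2. Container stdout/stderr in the crash window.
docker logs --since "2023-08-27T04:38:00" --until "2023-08-27T07:06:00" mongodb

# 3. Kernel log: look for the OOM killer picking mongod.
dmesg -T | grep -iE 'out of memory|oom|killed process'

# 3b. Same search via journald, limited to the crash window.
journalctl -k --since "2023-08-27 04:38" --until "2023-08-27 07:06" | grep -iE 'oom|killed process'
```

If the kernel log shows `Out of memory: Killed process ... (mongod)`, the fix is usually to cap WiredTiger's cache (`storage.wiredTiger.engineConfig.cacheSizeGB`) or give the container a memory limit that matches what mongod is allowed to use.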