Can't start the mongod service after modifying the mongod.conf file

Hi,
I'm not able to restart the mongod service after modifying the .conf file; I only enabled security and the replica set in mongod.conf. I have tried both the 4.4 and 6.0 versions.

The mongod service goes to the active state when I disable security or the replica set in the conf file.

What is your config like?

Any error messages in the log file?

@Kobe_W, I've included the log and configuration here:

mongod.log

{"t":{"$date":"2023-06-08T14:05:24.149+05:30"},"s":"I", "c":"CONTROL", "id":23377, "ctx":"SignalHandler","msg":"Received signal","attr":{"signal":15,"error":"Terminated"}}
{"t":{"$date":"2023-06-08T14:05:24.149+05:30"},"s":"I", "c":"CONTROL", "id":23378, "ctx":"SignalHandler","msg":"Signal was sent by kill(2)","attr":{"pid":1,"uid":0}}
{"t":{"$date":"2023-06-08T14:05:24.149+05:30"},"s":"I", "c":"CONTROL", "id":23381, "ctx":"SignalHandler","msg":"will terminate after current cmd ends"}
{"t":{"$date":"2023-06-08T14:05:24.149+05:30"},"s":"I", "c":"REPL", "id":4784900, "ctx":"SignalHandler","msg":"Stepping down the ReplicationCoordinator for shutdown","attr":{"waitTimeMillis":10000}}
{"t":{"$date":"2023-06-08T14:05:24.149+05:30"},"s":"I", "c":"COMMAND", "id":4784901, "ctx":"SignalHandler","msg":"Shutting down the MirrorMaestro"}
{"t":{"$date":"2023-06-08T14:05:24.149+05:30"},"s":"I", "c":"REPL", "id":40441, "ctx":"SignalHandler","msg":"Stopping TopologyVersionObserver"}
{"t":{"$date":"2023-06-08T14:05:24.149+05:30"},"s":"I", "c":"REPL", "id":40447, "ctx":"TopologyVersionObserver","msg":"Stopped TopologyVersionObserver"}
{"t":{"$date":"2023-06-08T14:05:24.150+05:30"},"s":"I", "c":"SHARDING", "id":4784902, "ctx":"SignalHandler","msg":"Shutting down the WaitForMajorityService"}
{"t":{"$date":"2023-06-08T14:05:24.150+05:30"},"s":"I", "c":"CONTROL", "id":4784903, "ctx":"SignalHandler","msg":"Shutting down the LogicalSessionCache"}
{"t":{"$date":"2023-06-08T14:05:24.151+05:30"},"s":"I", "c":"NETWORK", "id":20562, "ctx":"SignalHandler","msg":"Shutdown: going to close listening sockets"}
{"t":{"$date":"2023-06-08T14:05:24.151+05:30"},"s":"I", "c":"NETWORK", "id":23017, "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
{"t":{"$date":"2023-06-08T14:05:24.152+05:30"},"s":"I", "c":"NETWORK", "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"}
{"t":{"$date":"2023-06-08T14:05:24.152+05:30"},"s":"I", "c":"STORAGE", "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
{"t":{"$date":"2023-06-08T14:05:24.152+05:30"},"s":"I", "c":"-", "id":20520, "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
{"t":{"$date":"2023-06-08T14:05:24.152+05:30"},"s":"I", "c":"REPL", "id":4784907, "ctx":"SignalHandler","msg":"Shutting down the replica set node executor"}
{"t":{"$date":"2023-06-08T14:05:24.153+05:30"},"s":"I", "c":"ASIO", "id":22582, "ctx":"ReplNodeDbWorkerNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-06-08T14:05:24.153+05:30"},"s":"I", "c":"STORAGE", "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
{"t":{"$date":"2023-06-08T14:05:24.153+05:30"},"s":"I", "c":"STORAGE", "id":4784934, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToDecreaseSnapshotHistoryCachePressure"}
{"t":{"$date":"2023-06-08T14:05:24.153+05:30"},"s":"I", "c":"REPL", "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
{"t":{"$date":"2023-06-08T14:05:24.153+05:30"},"s":"I", "c":"REPL", "id":21328, "ctx":"SignalHandler","msg":"Shutting down replication subsystems"}
{"t":{"$date":"2023-06-08T14:05:24.153+05:30"},"s":"I", "c":"REPL", "id":21302, "ctx":"SignalHandler","msg":"Stopping replication reporter thread"}
{"t":{"$date":"2023-06-08T14:05:24.153+05:30"},"s":"I", "c":"REPL", "id":21303, "ctx":"SignalHandler","msg":"Stopping replication fetcher thread"}
{"t":{"$date":"2023-06-08T14:05:24.153+05:30"},"s":"I", "c":"REPL", "id":21304, "ctx":"SignalHandler","msg":"Stopping replication applier thread"}
{"t":{"$date":"2023-06-08T14:05:24.573+05:30"},"s":"I", "c":"-", "id":4333222, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM received failed isMaster","attr":{"host":"tp-testreplica2:27017","error":"HostUnreachable: Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused","replicaSet":"tp1","isMasterReply":"{}"}}
{"t":{"$date":"2023-06-08T14:05:24.573+05:30"},"s":"I", "c":"NETWORK", "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"tp1","host":"tp-testreplica2:27017","error":{"code":6,"codeName":"HostUnreachable","errmsg":"Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused"},"action":{"dropConnections":true,"requestImmediateCheck":false,"outcome":{"host":"tp-testreplica2:27017","success":false,"errorMessage":"HostUnreachable: Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused"}}}}
{"t":{"$date":"2023-06-08T14:05:24.621+05:30"},"s":"I", "c":"REPL_HB", "id":23974, "ctx":"ReplCoord-0","msg":"Heartbeat failed after max retries","attr":{"target":"tp-testreplica2:27017","maxHeartbeatRetries":2,"error":{"code":6,"codeName":"HostUnreachable","errmsg":"Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused"}}}
{"t":{"$date":"2023-06-08T14:05:25.073+05:30"},"s":"I", "c":"CONNPOOL", "id":22576, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Connecting","attr":{"hostAndPort":"tp-testreplica2:27017"}}
{"t":{"$date":"2023-06-08T14:05:25.073+05:30"},"s":"I", "c":"-", "id":4333222, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"RSM received failed isMaster","attr":{"host":"tp-testreplica2:27017","error":"HostUnreachable: Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused","replicaSet":"tp1","isMasterReply":"{}"}}
{"t":{"$date":"2023-06-08T14:05:25.073+05:30"},"s":"I", "c":"NETWORK", "id":4712102, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Host failed in replica set","attr":{"replicaSet":"tp1","host":"tp-testreplica2:27017","error":{"code":6,"codeName":"HostUnreachable","errmsg":"Error connecting to tp-testreplica2:27017 (192.168.1.191:27017) :: caused by :: Connection refused"},"action":{"dropConnections":true,"requestImmediateCheck":true}}}
{"t":{"$date":"2023-06-08T14:05:25.075+05:30"},"s":"I", "c":"REPL", "id":21225, "ctx":"OplogApplier-0","msg":"Finished oplog application"}
{"t":{"$date":"2023-06-08T14:05:25.078+05:30"},"s":"I", "c":"REPL", "id":21107, "ctx":"BackgroundSync","msg":"Stopping replication producer"}
{"t":{"$date":"2023-06-08T14:05:25.078+05:30"},"s":"I", "c":"REPL", "id":21307, "ctx":"SignalHandler","msg":"Stopping replication storage threads"}
{"t":{"$date":"2023-06-08T14:05:25.078+05:30"},"s":"I", "c":"ASIO", "id":22582, "ctx":"OplogApplierNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-06-08T14:05:25.078+05:30"},"s":"I", "c":"ASIO", "id":22582, "ctx":"ReplCoordExternNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"ASIO", "id":22582, "ctx":"ReplNetwork","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"REPL", "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"-", "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"-", "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":6}}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"COMMAND", "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"REPL", "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"INDEX", "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"REPL", "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"REPL", "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"NETWORK", "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"NETWORK", "id":4333209, "ctx":"SignalHandler","msg":"Closing Replica Set Monitor","attr":{"replicaSet":"tp1"}}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"NETWORK", "id":4333210, "ctx":"SignalHandler","msg":"Done closing Replica Set Monitor","attr":{"replicaSet":"tp1"}}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"ASIO", "id":22582, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Killing all outstanding egress activity."}
{"t":{"$date":"2023-06-08T14:05:25.079+05:30"},"s":"I", "c":"CONNPOOL", "id":22572, "ctx":"ReplicaSetMonitor-TaskExecutor","msg":"Dropping all pooled connections","attr":{"hostAndPort":"tp-testreplica1:27017","error":"ShutdownInProgress: Shutting down the connection pool"}}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn2","msg":"Connection ended","attr":{"remote":"192.168.1.190:52606","connectionId":2,"connectionCount":1}}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"REPL", "id":4784920, "ctx":"SignalHandler","msg":"Shutting down the LogicalTimeValidator"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn3","msg":"Connection ended","attr":{"remote":"192.168.1.190:52612","connectionId":3,"connectionCount":0}}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"CONTROL", "id":4784925, "ctx":"SignalHandler","msg":"Shutting down free monitoring"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"CONTROL", "id":20609, "ctx":"SignalHandler","msg":"Shutting down free monitoring"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":22320, "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":22321, "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":20282, "ctx":"SignalHandler","msg":"Deregistering all the collections"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":22372, "ctx":"OplogVisibilityThread","msg":"Oplog visibility thread shutting down."}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":22261, "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":22317, "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":22318, "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":22319, "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":22322, "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
{"t":{"$date":"2023-06-08T14:05:25.080+05:30"},"s":"I", "c":"STORAGE", "id":22323, "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
{"t":{"$date":"2023-06-08T14:05:25.081+05:30"},"s":"I", "c":"STORAGE", "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
{"t":{"$date":"2023-06-08T14:05:25.082+05:30"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":"[1686213325:82578][98841:0x7faf3b10d700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 10, snapshot max: 10 snapshot count: 0, oldest timestamp: (1686080677, 1) , meta checkpoint timestamp: (1686080677, 1) base write gen: 2908"}}
{"t":{"$date":"2023-06-08T14:05:25.089+05:30"},"s":"I", "c":"STORAGE", "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":8}}
{"t":{"$date":"2023-06-08T14:05:25.089+05:30"},"s":"I", "c":"STORAGE", "id":22279, "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
{"t":{"$date":"2023-06-08T14:05:25.089+05:30"},"s":"I", "c":"-", "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
{"t":{"$date":"2023-06-08T14:05:25.089+05:30"},"s":"I", "c":"FTDC", "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"}
{"t":{"$date":"2023-06-08T14:05:25.089+05:30"},"s":"I", "c":"FTDC", "id":20626, "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
{"t":{"$date":"2023-06-08T14:05:25.092+05:30"},"s":"I", "c":"CONTROL", "id":20565, "ctx":"SignalHandler","msg":"Now exiting"}
{"t":{"$date":"2023-06-08T14:05:25.092+05:30"},"s":"I", "c":"CONTROL", "id":23138, "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  wiredTiger:

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

security:
  authorization: enabled
  keyfile: /root/keyfile/keyfile_mongod
  #transitionToAuth: true

#operationProfiling:

replication:
  replSetName: "tp1"

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:
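One thing worth double-checking in the security stanza above: mongod.conf is YAML, so the options must be indented under their section headers, and option names are case-sensitive. The documented option is security.keyFile (capital F); a lowercase keyfile would most likely be rejected as an unrecognized option at startup, which matches the service dying only when security is enabled. A minimal sketch of the two stanzas as the docs spell them (written to a scratch path here just to show the shape):

```shell
# Sketch only: the security/replication stanzas with the documented
# option names and YAML indentation (two spaces under each section).
# Note "keyFile" is camelCase in the MongoDB docs, not "keyfile".
cat > /tmp/mongod-security-snippet.yaml <<'EOF'
security:
  authorization: enabled
  keyFile: /root/keyfile/keyfile_mongod
replication:
  replSetName: tp1
EOF
cat /tmp/mongod-security-snippet.yaml
```

The keyfile path is also worth a look: the service runs as the mongod user, and a file under /root is normally not readable by that user.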

mongod.service

[Unit]
Description=MongoDB Database Server
Documentation=https://docs.mongodb.org/manual
After=network-online.target
Wants=network-online.target

[Service]
User=mongod
Group=mongod
Environment="OPTIONS=-f /etc/mongod.conf"
Environment="MONGODB_CONFIG_OVERRIDE_NOFORK=1"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS

# file size
LimitFSIZE=infinity
# cpu time
LimitCPU=infinity
# virtual memory size
LimitAS=infinity
# open files
LimitNOFILE=64000
# processes/threads
LimitNPROC=64000
# locked memory
LimitMEMLOCK=infinity
# total threads (user+kernel)
TasksMax=infinity
TasksAccounting=false

# Recommended limits for mongod as specified in
# https://docs.mongodb.com/manual/reference/ulimit/#recommended-ulimit-settings

[Install]
WantedBy=multi-user.target

Hi @Mohamed_Ismail

I'm assuming you installed MongoDB using a package manager, e.g. brew? Note that in most cases, MongoDB installed by those management systems is meant to be used as a development platform, and is thus very lax about security; many of them deploy a standalone node.

If this is for development, please try enabling replication first, without auth, and see if it works. That way you don't end up trying to solve two things at once.

See Deploy a Replica Set
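For reference, the initiation step could look like the sketch below. The hostnames are taken from the log earlier in the thread, and the file path is arbitrary; run it against one member once both mongod processes are up with replication enabled (and security still off, per the advice above):

```shell
# Sketch: replica-set initiation document for the two hosts seen in the
# log. Written to a scratch file so it can be fed to mongosh on one member.
cat > /tmp/rs-init.js <<'EOF'
rs.initiate({
  _id: "tp1",
  members: [
    { _id: 0, host: "tp-testreplica1:27017" },
    { _id: 1, host: "tp-testreplica2:27017" }
  ]
})
EOF
# then, on one member:
#   mongosh "mongodb://tp-testreplica1:27017" /tmp/rs-init.js
cat /tmp/rs-init.js
```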

Once that works, then enable auth for the replica set.

See Update Replica Set to Keyfile Authentication
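The keyfile itself can be generated with openssl (this is the approach the docs use). mongod refuses a keyfile that is readable by group or other, and the file must be owned by the user the service runs as (mongod here). A sketch using a scratch path; in practice put it somewhere the mongod user can read, which /root typically is not:

```shell
# Generate a keyfile for internal replica-set authentication.
# 756 random bytes base64-encode to roughly 1,000 characters,
# within mongod's 1024-character keyfile limit.
openssl rand -base64 756 > /tmp/keyfile_mongod
# mongod requires owner-only permissions on the keyfile
chmod 400 /tmp/keyfile_mongod
# on the real host, also: chown mongod:mongod /path/to/keyfile
ls -l /tmp/keyfile_mongod
```

The same keyfile (identical contents) must be placed on every member of the set.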

Best regards
Kevin