mongod exiting with code 1, cannot see any specific error in log file

Sorry for the short question, but I don’t know how to describe my problem. Here is the output when I try to run the mongo shell:

    MongoDB shell version v5.0.3
    connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
    Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
    connect@src/mongo/shell/mongo.js:372:17
    @(connect):2:6
    exception: connect failed
    exiting with code 1

And here are some logs:

    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"NETWORK",  "id":23017,   "ctx":"listener","msg":"removing socket file","attr":{"path":"/tmp/mongodb-27017.sock"}}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"NETWORK",  "id":4784905, "ctx":"SignalHandler","msg":"Shutting down the global connection pool"}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"CONTROL",  "id":4784906, "ctx":"SignalHandler","msg":"Shutting down the FlowControlTicketholder"}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"-",        "id":20520,   "ctx":"SignalHandler","msg":"Stopping further Flow Control ticket acquisitions."}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"CONTROL",  "id":4784908, "ctx":"SignalHandler","msg":"Shutting down the PeriodicThreadToAbortExpiredTransactions"}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"REPL",     "id":4784909, "ctx":"SignalHandler","msg":"Shutting down the ReplicationCoordinator"}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"SHARDING", "id":4784910, "ctx":"SignalHandler","msg":"Shutting down the ShardingInitializationMongoD"}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"REPL",     "id":4784911, "ctx":"SignalHandler","msg":"Enqueuing the ReplicationStateTransitionLock for shutdown"}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"-",        "id":4784912, "ctx":"SignalHandler","msg":"Killing all operations for shutdown"}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"-",        "id":4695300, "ctx":"SignalHandler","msg":"Interrupted all currently running operations","attr":{"opsKilled":3}}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"TENANT_M", "id":5093807, "ctx":"SignalHandler","msg":"Shutting down all TenantMigrationAccessBlockers on global shutdown"}
    {"t":{"$date":"2021-10-20T16:32:14.249+07:00"},"s":"I",  "c":"COMMAND",  "id":4784913, "ctx":"SignalHandler","msg":"Shutting down all open transactions"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"REPL",     "id":4784914, "ctx":"SignalHandler","msg":"Acquiring the ReplicationStateTransitionLock for shutdown"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"INDEX",    "id":4784915, "ctx":"SignalHandler","msg":"Shutting down the IndexBuildsCoordinator"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"REPL",     "id":4784916, "ctx":"SignalHandler","msg":"Reacquiring the ReplicationStateTransitionLock for shutdown"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"REPL",     "id":4784917, "ctx":"SignalHandler","msg":"Attempting to mark clean shutdown"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"NETWORK",  "id":4784918, "ctx":"SignalHandler","msg":"Shutting down the ReplicaSetMonitor"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"SHARDING", "id":4784921, "ctx":"SignalHandler","msg":"Shutting down the MigrationUtilExecutor"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"ASIO",     "id":22582,   "ctx":"MigrationUtil-TaskExecutor","msg":"Killing all outstanding egress activity."}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"COMMAND",  "id":4784923, "ctx":"SignalHandler","msg":"Shutting down the ServiceEntryPoint"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"CONTROL",  "id":4784925, "ctx":"SignalHandler","msg":"Shutting down free monitoring"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"CONTROL",  "id":20609,   "ctx":"SignalHandler","msg":"Shutting down free monitoring"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"CONTROL",  "id":4784927, "ctx":"SignalHandler","msg":"Shutting down the HealthLog"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"CONTROL",  "id":4784928, "ctx":"SignalHandler","msg":"Shutting down the TTL monitor"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"INDEX",    "id":3684100, "ctx":"SignalHandler","msg":"Shutting down TTL collection monitor thread"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"INDEX",    "id":3684101, "ctx":"SignalHandler","msg":"Finished shutting down TTL collection monitor thread"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"CONTROL",  "id":4784929, "ctx":"SignalHandler","msg":"Acquiring the global lock for shutdown"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"CONTROL",  "id":4784930, "ctx":"SignalHandler","msg":"Shutting down the storage engine"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"STORAGE",  "id":22320,   "ctx":"SignalHandler","msg":"Shutting down journal flusher thread"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"STORAGE",  "id":22321,   "ctx":"SignalHandler","msg":"Finished shutting down journal flusher thread"}
    {"t":{"$date":"2021-10-20T16:32:14.250+07:00"},"s":"I",  "c":"STORAGE",  "id":22322,   "ctx":"SignalHandler","msg":"Shutting down checkpoint thread"}
    {"t":{"$date":"2021-10-20T16:32:14.251+07:00"},"s":"I",  "c":"STORAGE",  "id":22323,   "ctx":"SignalHandler","msg":"Finished shutting down checkpoint thread"}
    {"t":{"$date":"2021-10-20T16:32:14.251+07:00"},"s":"I",  "c":"STORAGE",  "id":20282,   "ctx":"SignalHandler","msg":"Deregistering all the collections"}
    {"t":{"$date":"2021-10-20T16:32:14.251+07:00"},"s":"I",  "c":"STORAGE",  "id":22261,   "ctx":"SignalHandler","msg":"Timestamp monitor shutting down"}
    {"t":{"$date":"2021-10-20T16:32:14.251+07:00"},"s":"I",  "c":"STORAGE",  "id":22317,   "ctx":"SignalHandler","msg":"WiredTigerKVEngine shutting down"}
    {"t":{"$date":"2021-10-20T16:32:14.251+07:00"},"s":"I",  "c":"STORAGE",  "id":22318,   "ctx":"SignalHandler","msg":"Shutting down session sweeper thread"}
    {"t":{"$date":"2021-10-20T16:32:14.251+07:00"},"s":"I",  "c":"STORAGE",  "id":22319,   "ctx":"SignalHandler","msg":"Finished shutting down session sweeper thread"}
    {"t":{"$date":"2021-10-20T16:32:14.251+07:00"},"s":"I",  "c":"STORAGE",  "id":4795902, "ctx":"SignalHandler","msg":"Closing WiredTiger","attr":{"closeConfig":"leak_memory=true,"}}
    {"t":{"$date":"2021-10-20T16:32:14.252+07:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"SignalHandler","msg":"WiredTiger message","attr":{"message":"[1634722334:252129][142289:0x7f6df9745700], close_ckpt: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 18, snapshot max: 18 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 7"}}
    {"t":{"$date":"2021-10-20T16:32:14.274+07:00"},"s":"I",  "c":"STORAGE",  "id":4795901, "ctx":"SignalHandler","msg":"WiredTiger closed","attr":{"durationMillis":23}}
    {"t":{"$date":"2021-10-20T16:32:14.274+07:00"},"s":"I",  "c":"STORAGE",  "id":22279,   "ctx":"SignalHandler","msg":"shutdown: removing fs lock..."}
    {"t":{"$date":"2021-10-20T16:32:14.274+07:00"},"s":"I",  "c":"-",        "id":4784931, "ctx":"SignalHandler","msg":"Dropping the scope cache for shutdown"}
    {"t":{"$date":"2021-10-20T16:32:14.274+07:00"},"s":"I",  "c":"FTDC",     "id":4784926, "ctx":"SignalHandler","msg":"Shutting down full-time data capture"}
    {"t":{"$date":"2021-10-20T16:32:14.274+07:00"},"s":"I",  "c":"FTDC",     "id":20626,   "ctx":"SignalHandler","msg":"Shutting down full-time diagnostic data capture"}
    {"t":{"$date":"2021-10-20T16:32:14.279+07:00"},"s":"I",  "c":"CONTROL",  "id":20565,   "ctx":"SignalHandler","msg":"Now exiting"}
    {"t":{"$date":"2021-10-20T16:32:14.279+07:00"},"s":"I",  "c":"CONTROL",  "id":23138,   "ctx":"SignalHandler","msg":"Shutting down","attr":{"exitCode":0}}

But when I run sudo mongod -f /etc/mongod.conf, I can connect with the mongo shell.

Hi @MAY_CHEAPER,

Were you able to find a solution to your issue? Your log snippet starts after shutdown has initiated, so I think the most interesting log lines are missing.

If sudo mongod works fine, my first guess would be that there are problems with file & directory permissions that are ignored when you use sudo to start the mongod process as the root user.

If so, I recommend fixing file & directory permissions so your mongod process can run as an unprivileged user.
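A minimal sketch of that check, assuming the default RPM install paths and the mongod service account the package creates (take the real paths from storage.dbPath and systemLog.path in your /etc/mongod.conf):

```shell
# Hypothetical default paths from an RPM install; adjust to your config.
DBPATH=/var/lib/mongo
LOGDIR=/var/log/mongodb

# Show who owns the data and log directories (they may not exist everywhere).
for d in "$DBPATH" "$LOGDIR"; do
    if [ -e "$d" ]; then ls -ld "$d"; else echo "missing: $d"; fi
done

# If a sudo run left files owned by root, hand them back to the
# service account and restart the service:
#   sudo chown -R mongod:mongod /var/lib/mongo /var/log/mongodb
#   sudo systemctl restart mongod
```

Once ownership matches the user the service runs as, mongod should start without sudo.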

Regards,
Stennie


I don’t know what the reason was, but after a lot of searching on Google, I reinstalled MongoDB and it works fine now. Thanks for your support.

@Stennie_X I have the same issue when I am logged in as root on my system and then install MongoDB. Why would I have permission issues, and how do I set the permissions to resolve this after I exec into my pod?

Hello friends,
I have the same problem. I have been searching for about two days without any result, and my problem is still not solved. Please help me solve it.
The error that is displayed for me is below.

I use CentOS 7.6 with cPanel, LiteSpeed, and CloudLinux.

    [root@srv ~]# mongo
    MongoDB shell version v5.0.15
    connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
    Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
    connect@src/mongo/shell/mongo.js:372:17
    @(connect):2:6
    exception: connect failed
    exiting with code 1

I also checked the port, but it doesn’t show anything:

    [root@srv ~]# netstat -an | grep 27017
    [root@srv ~]#

and the /etc/mongod.conf file is as follows:

    systemLog:
      destination: file
      logAppend: true
      path: /var/log/mongodb/mongod.log

    # Where and how to store data.
    storage:
      dbPath: /var/lib/mongo
      journal:
        enabled: true
    #  engine:
    #  wiredTiger:

    # how the process runs
    processManagement:
      fork: true  # fork and run in background
      pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
      timeZoneInfo: /usr/share/zoneinfo

    # network interfaces
    net:
      port: 27017
      bindIp: 127.0.0.1,<my server ip for example 178.1.1.1>  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.


    security:
    #authorization: enabled

    #operationProfiling:

    #replication:

    #sharding:

    ## Enterprise-Only Options

    #auditLog:

Your mongod should be up and running for you to connect to it.
If you installed it as a service, you need to start the service first.
If it is not installed as a service, you need to start mongod manually from the command line with the appropriate parameters.
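A sketch of those two options, assuming the standard package layout (the systemd unit is named mongod and the config lives at /etc/mongod.conf):

```shell
# Start mongod as a service if the unit exists, otherwise fall back
# to starting it manually with the same config file.
start_mongod() {
    if systemctl list-unit-files mongod.service 2>/dev/null | grep -q mongod; then
        sudo systemctl start mongod       # unit name is mongod, not mongodb
        systemctl status mongod --no-pager
    else
        sudo mongod -f /etc/mongod.conf   # manual start, same parameters
    fi
}

# After either path, confirm something is listening on 27017:
#   netstat -an | grep 27017
```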

The Mongo service is installed but does not start:

    [root@srv ~]# sudo service mongodb start
    Redirecting to /bin/systemctl start mongodb.service
    Failed to start mongodb.service: Unit not found.

and status:

    [root@srv ~]# service mongod status
    Redirecting to /bin/systemctl status mongod.service
    ● mongod.service - MongoDB Database Server
       Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
       Active: failed (Result: exit-code) since Mon 2023-02-27 16:57:53 +0330; 2h 20min ago
         Docs: https://docs.mongodb.org/manual
     Main PID: 1047495 (code=exited, status=2)

    Feb 27 16:57:53 srv.sayansite.com systemd[1]: Started MongoDB Database Server.
    Feb 27 16:57:53 srv.sayansite.com systemd[1]: mongod.service: main process exited, code=exited, status=2/INVALIDARGUMENT
    Feb 27 16:57:53 srv.sayansite.com systemd[1]: Unit mongod.service entered failed state.
    Feb 27 16:57:53 srv.sayansite.com systemd[1]: mongod.service failed.
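An aside on that status line: exit status 2 (INVALIDARGUMENT) generally means mongod rejected its invocation, most often a config file it could not parse, rather than a runtime failure. Running mongod in the foreground with the same config usually prints the exact error. A quick first check, sketched below, is for tab indentation in the YAML config, which YAML forbids and which is a common cause of parse failures (CONF is the path from the post):

```shell
CONF=/etc/mongod.conf

# Exit status 2 usually means a bad option or an unparseable config.
# One frequent YAML mistake is tab indentation, which YAML forbids.
if [ -f "$CONF" ] && grep -q "$(printf '\t')" "$CONF"; then
    MSG="tab characters found in $CONF -- replace them with spaces"
else
    MSG="no tabs found (or $CONF absent); run mongod in the foreground for the real error"
fi
echo "$MSG"

# To see mongod's own complaint, run it with the service's config:
#   sudo mongod -f /etc/mongod.conf
# and check the unit log:
#   sudo journalctl -u mongod -n 50 --no-pager
```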

Please help me.

What is funny is that you use the correct service name in one command but the wrong one in the other.

You are right, but it will be redirected to service mongod start.

To mongodb.service, perhaps, as indicated by the redirect warning, which is still the wrong name; otherwise you would not get the “Unit not found” error.

As mentioned, you are using the correct name, mongod, when you query the status, but the wrong name, mongodb, when you try to start the service.

A reinstall helped me as well, but I had to move the MongoDB data directory out of the way first; with the old directory in place, the reinstall would not recreate it. I’m using brew…

    mv /opt/homebrew/var/mongodb /opt/homebrew/var/mongodb-old
    brew reinstall mongodb-community@4.4
    brew services restart mongodb/brew/mongodb-community@4.4