K8s StatefulSet auth not working

I’ve had a lot of trouble getting the operator working, but I really don’t need it. I only want a very simple setup where an app connects to a standalone MongoDB instance. This works fine, but it does not enable any sort of authentication. I’ve tried adding MONGODB_INITDB_ROOT_USERNAME and MONGODB_INITDB_ROOT_PASSWORD, but when I try to connect to the db using those credentials, it doesn’t find the user. Here are my basic example YAML files:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources: { requests: { storage: 9Gi } }
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
        selector: mongodb
    spec:
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: mongo-pvc
      containers:
        - name: mongodb
          image: mongo
          env:
            - name: MONGODB_INITDB_ROOT_USERNAME
              value: "myuser"
            - name: MONGODB_INITDB_ROOT_PASSWORD
              value: "mypassword"
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: pvc
              mountPath: /data/db
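
For reference, this is roughly how I check whether the user exists (assuming an image tag recent enough to ship mongosh; the pod name mongodb-0 follows the StatefulSet naming):

# exec into the single pod created by the StatefulSet
kubectl exec -it mongodb-0 -- \
  mongosh -u myuser -p mypassword --authenticationDatabase admin
# the login is rejected because the root user was never created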

From everything I’ve seen, the presence of those env vars should cause it to create that root user with that password, but it doesn’t seem to work. And it doesn’t even enable authentication at all!
I found some more examples that say I need to modify the command run inside the image, basically to make it run mongod --auth. This does successfully enable authentication, but doesn’t create any user, so it’s not helpful.
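
For completeness, the override from those examples looked roughly like this (only the container portion of the StatefulSet above is shown):

      containers:
        - name: mongodb
          image: mongo
          # overriding command replaces the image entrypoint, so mongod starts
          # with access control enforced, but no init scripts run to create users
          command: ["mongod", "--auth"]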

Any help or guidance would be greatly appreciated. I’ve spent over a week trying dozens of things that people on various forums claim work, and maybe they did work in 2017 when they were written, but they don’t seem to work with today’s containers.
It’s just a small db needed by a web app. I have no need of replica sets, sharding, or any other advanced features. I just want something simple, without introducing operators, Helm charts, or the like.

Hi @Miguel_Hernandez and welcome to the MongoDB community forums!

Kubernetes Secrets hold confidential information that pods need in order to access services.

Therefore, you may want to store the username and password in a Secret and reference it from your deployment instead of using plain values.
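
A minimal sketch of that approach is below (the Secret name and keys are only examples; it assumes the official mongo image, whose documented variables are MONGO_INITDB_ROOT_USERNAME / MONGO_INITDB_ROOT_PASSWORD and which applies them only when the container first starts with an empty data directory):

---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-credentials
type: Opaque
stringData:
  username: myuser
  password: mypassword
---
# in the StatefulSet's container spec, reference the Secret instead of plain values
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongodb-credentials
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongodb-credentials
                  key: password

Note that if the persistent volume already contains data from earlier attempts, the image will skip user creation, so you may need to start from an empty volume for the credentials to take effect.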

If the above does not work, could you provide all the YAML files necessary to replicate the issue in my local environment?

Regards
Aasawari

Hi @Aasawari

I’m continuing this thread since I’m encountering the same problem with MongoDB in my k8s cluster. I managed to use Secrets for the sensitive data, but I still face the same problem as @Miguel_Hernandez.

Do you have any idea how to handle the problem?

Regards

Hi @Idrissa_KONKOBO and welcome to the community.

In general, it is preferable to start a new discussion to keep the details of different environments and questions separate and to improve the visibility of new discussions. That will also allow you to mark your topic as “Solved” when you resolve any outstanding questions.

Mentioning the URL of an existing discussion on the forum will automatically create links between related discussions for other users to follow.

It would be helpful if you could share the YAML files and the error message you are seeing while installing MongoDB in your Kubernetes environment.

Best Regards
Aasawari