Unable to mount fsType=xfs volume for MongoDB Operator resource

I have installed the MongoDB Enterprise Kubernetes Operator following these steps.

I am trying to use different storage classes for the data and journal volumes on the shard members, with an XFS file system for the data volume, so I created the storage class below:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: mongo-data-xfs
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: io1
      iopsPerGB: "10"
      fstype: xfs
      encrypted: "true"
    reclaimPolicy: Retain
    allowVolumeExpansion: true
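As a sanity check, the class can also be exercised outside the operator with a plain PVC (the claim name below is just illustrative):

```yaml
# Illustrative: a standalone PVC against the new storage class;
# if it binds and a pod can mount it, the class itself is fine.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: xfs-test-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: mongo-data-xfs
  resources:
    requests:
      storage: 5Gi
```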

My sharded cluster deployment looks like this:

    apiVersion: mongodb.com/v1
    kind: MongoDB
    metadata:
      name: ct-sharded-cluster
    spec:
      shardCount: 1
      mongodsPerShardCount: 3
      mongosCount: 1
      configServerCount: 1
      version: 4.2.2-ent
      opsManager:
        configMapRef:
          name: shard-cluster-config   # Must match metadata.name in ConfigMap file
      credentials: ops-manager-api
      type: ShardedCluster
      persistent: true
      shardPodSpec:
        cpuRequests: 500m
        cpu: 500m
        memoryRequests: 1G
        memory: 1G
        persistence:
          multiple:
            data:
              storage: 5G
              storageClass: mongo-data-xfs
            journal:
              storage: 2G
              storageClass: mongo-beside-data
            logs:
              storage: 2G
              storageClass: mongo-beside-data

Now when I deploy this cluster with the operator, the journal and logs volumes come up because they use fsType=ext4, but the data volume does not, and the pod is stuck in the ContainerCreating state. If I describe the pod, I get these failed volume mount messages:

    Warning  FailedMount  30m (x300 over 13h)  kubelet, ip-10-0-23-43.ec2.internal  Unable to mount volumes for pod "test-cluster-0-0_mongo-operator(385f92e8-07e3-460f-af48-b80c4fe28e45)": timeout expired waiting for volumes to attach or mount for pod "mongo-operator"/"test-cluster-0-0". list of unmounted volumes=[data]. list of unattached volumes=[data journal logs mongodb-enterprise-database-pods-token-kdhvp]

    Warning  FailedMount  5m12s (x458 over 13h)  kubelet, ip-10-0-23-43.ec2.internal  (combined from similar events): Unable to mount volumes for pod "test-cluster-0-0_mongo-operator(385f92e8-07e3-460f-af48-b80c4fe28e45)": timeout expired waiting for volumes to attach or mount for pod "mongo-operator"/"test-cluster-0-0". list of unmounted volumes=[data]. list of unattached volumes=[data journal logs mongodb-enterprise-database-pods-token-kdhvp]

    mount: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1a/vol-0143fd796b141be64: wrong fs type, bad option, bad superblock on /dev/xvdch, missing codepage or helper program, or other error.

Has anyone tried an fsType=xfs volume with the EBS provisioner in Kubernetes? Any help is appreciated.

Thanks.

Hi @anand_babu,
Your spec looks OK (though the formatting is a bit shifted), and I think the issue, as you noted, is in the cloud provider, not in the Operator. To simplify the scenario, I'd recommend creating a simple pod and mounting a PV with this storage class manually to reproduce the issue.
I believe some EBS docs/forums can be of help here; I'm not sure we've met this problem before.
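A sketch of such a test pod, assuming a PVC has already been created against the `mongo-data-xfs` class (the pod and claim names here are illustrative):

```yaml
# Illustrative test pod: mounts a PVC provisioned from the xfs
# storage class. If it gets stuck in ContainerCreating with the
# same FailedMount event, the problem is reproduced outside the
# Operator.
apiVersion: v1
kind: Pod
metadata:
  name: xfs-test-pod
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: xfs-test-claim   # a PVC using storageClassName: mongo-data-xfs
```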
