I have the following problem: I want to mount different CephFS volumes in my pods, but only the first created CephFS ever gets mounted.

I have several filesystems in my Ceph cluster:

  • volume1
  • volume2
  • etc.

Only "volume1" gets mounted all the time, it doesnt matter what User or Secret I use

I have the following configuration in my pod:

volumes:
  - name: cephfs-volume2
    cephfs:
      monitors:
      - XXX.XXX.XXX.XXX:6789
      - XXX.XXX.XXX.XXX:6789
      - XXX.XXX.XXX.XXX:6789
      path: /
      user: MY_USERNAME
      secretRef:
        name: MY_SECRET
      readOnly: false

Here is the output of "ceph auth get" for the user I use for "volume2":

client.MY_USERNAME
key: ****MASKED****
caps: [mds] allow rw
caps: [mon] allow r
caps: [osd] allow rwx tag cephfs data=volume2

When I mount the CephFS directly on my host with "mount", I can specify "fs=volume2" and that works, but the Kubernetes volume definition has no such option.
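
For reference, this is roughly the mount command that works on the host (the mount point and the secret file path are just placeholders, not my real paths):

mount -t ceph XXX.XXX.XXX.XXX:6789,XXX.XXX.XXX.XXX:6789,XXX.XXX.XXX.XXX:6789:/ /mnt/volume2 \
  -o name=MY_USERNAME,secretfile=/etc/ceph/volume2.secret,fs=volume2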

Does anyone know how to fix this specific problem, or is it simply not possible with my setup? Otherwise I would just mount the CephFS filesystems on my hosts and use them as "hostPath" mounts, as sketched below.
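
The fallback I have in mind would look roughly like this in the pod spec (assuming the host already has volume2 mounted at /mnt/cephfs/volume2, which is just an example path):

volumes:
  - name: cephfs-volume2
    hostPath:
      path: /mnt/cephfs/volume2
      type: Directory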
