8

Can we use the NFS volume plugin to maintain High Availability and Disaster Recovery (HA/DR) across a Kubernetes cluster?

I am running a pod with MongoDB and I am getting the error

chown: changing ownership of '/data/db': Operation not permitted .

Could anybody please suggest how to resolve this error? Or:

Is there an alternative volume plugin you would suggest for achieving HA/DR in a Kubernetes cluster?

coder
  • 8,346
  • 16
  • 39
  • 53
BSG
  • 673
  • 2
  • 13
  • 33
  • Use formatting tools to make your post more readable. Use `code blocking` for code and log and error texts and **bold** and *italics* to highlight things – Morse Jul 06 '18 at 01:51
  • _Can we use nfs volume plugin to maintain the High Availability and Disaster Recovery among the kubernetes cluster?_ You'll want to be very careful using NFS with "databases" -- and I know mongo only loosely qualifies as a "database," but my point stands. You'll want to Run Like The Wind™ away from using EFS as NFS, if that applies to you. – mdaniel Jul 06 '18 at 04:29

5 Answers

16

chown: changing ownership of '/data/db': Operation not permitted .

You'll want to either launch the mongo container as root, so that you can chown the directory, or, if the image prohibits it (some images already have a `USER mongo` clause that prevents the container from escalating privileges back up to root), do one of two things: supersede the user with a `securityContext:` stanza in `containers:`, or use an `initContainer:` to preemptively change the ownership of the target folder to the mongo UID:

Approach #1:

containers:
- name: mongo
  image: mongo:something
  securityContext:
    runAsUser: 0  # run as root so the entrypoint can chown /data/db

(which may require altering your cluster's config to permit such a thing to appear in a PodSpec)
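
For example, if your cluster enforces PodSecurityPolicies (or a similar admission control), the policy in effect for the pod has to allow running as root. A minimal sketch, assuming PSP is the mechanism in play (the policy name is just a placeholder):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-run-as-root   # placeholder name
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny          # permits runAsUser: 0 in the PodSpec
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'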

Approach #2 (which is the one I use with Elasticsearch images):

initContainers:
- name: chmod-er
  image: busybox:latest
  command:
  - /bin/chown
  - -R
  - "1000"  # or whatever the mongo UID is, use string "1000" not 1000 due to yaml
  - /data/db
  volumeMounts:
  - name: mongo-data  # or whatever
    mountPath: /data/db
containers:
- name: mongo  # then run your container as before
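
For completeness, the mongo-data volume referenced above still has to be declared in the same pod spec; with NFS that could be a PersistentVolumeClaim bound to your NFS-backed PV, roughly like this (a sketch only; the claim name is a placeholder):

volumes:
- name: mongo-data
  persistentVolumeClaim:
    claimName: mongo-data   # PVC bound to your NFS-backed PV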
mdaniel
  • 31,240
  • 5
  • 55
  • 58
  • 1
    can you share me the yaml file – dinosaur Oct 17 '20 at 10:51
  • If I set rusAsUser: 0. It says must be in the range of 1000570000 - 1000579999. When I set to 1000570000 . DBException in initAndListen, terminating","attr":{"error":"IllegalOperation: Attempted to create a lock file on a read-only directory: /data/db"} When command CHOWN as seen above.. then it says : starting container process caused "exec: \"sudo chown -R mongodb:mongodb /data/db\": stat sudo chown -R mongodb:mongodb /data/db: no such file or directory". – Sanjeev Oct 23 '20 at 12:43
  • I would set the securityContext to a non root user (e.g. [999](https://github.com/docker-library/mongo/blob/master/4.2/Dockerfile#L11-L12)) so that the [chown of docker-entrypoint.sh](https://github.com/docker-library/mongo/blob/master/4.2/docker-entrypoint.sh#L14) will not be executed – Born78 Dec 21 '22 at 08:29
5

/data/db is a mountpoint, even if you don't explicitly mount a volume there. The data is persisted to an overlay specific to the pod. Kubernetes mounts all volumes as 0755 root:root, regardless of what the permissions of the directory were initially. Of course mongo cannot chown that.

If you mount the volume somewhere below /data/db, you will get the same error.

And if you mount the volume above it, at /data, the data will not be stored on the NFS share, because the mountpoint at /data/db will write to the overlay instead. But you won't get that error anymore.
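
A rough way to check where /data/db actually ends up is to look at the mounts from inside the running pod (the pod name below is a placeholder, and the exact output depends on your container runtime):

kubectl exec -it mongo-0 -- sh -c 'mount | grep /data'
# expect the NFS share on /data, while /data/db appears as a separate mount
# backed by node-local storage, i.e. the data never reaches the NFS server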

3

Adding `command: ["mongod"]` to your Deployment manifest overrides the image's default entrypoint script and prevents the chown from being executed.

...
    spec:
      containers:
      - name: mongodb
        image: mongo:4.4.0-bionic
        command: ["mongod"]
...
Fulvio
  • 925
  • 2
  • 13
  • 21
  • 1
    This saved my life today. Thank you. It works. However, you have to use "command: ["mongod", "--bind_ip", "0.0.0.0"]" if you want to be able to reach the database from another pod via a service or something. – Nick Jul 25 '23 at 12:17
2

I tested all of these options, but none of them worked for me. My alternative was to change the owner of the folder on the NFS server to user and group 999:999, and after that the deployment started to work.

My Kubernetes cluster uses nfs-subdir-external-provisioner, so the PV is created automatically.

On my NFS server:

chown -R 999:999 /export-path/data
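
The 999:999 owner matches the mongodb user and group baked into the official Debian-based mongo images; to double-check the IDs for the image tag you actually run, something like this should work (a quick sketch using plain docker):

# prints e.g. "uid=999(mongodb) gid=999(mongodb) ..." for the official image
docker run --rm mongo:4.4 id mongodb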

1

Instead of mounting /data/db, we could mount /data. Internally, mongo will create /data/db. During the entrypoint, mongo tries to chown this directory, but if we mount a volume directly at that mount point, the mongo container user will not be able to chown it. That is the cause of the issue.

Here is a sample of a working mongo Deployment YAML:


...
    spec:
      containers:
      - name: mongo
        image: mongo:latest
        volumeMounts:
        - mountPath: /data
          name: mongo-db-volume
      volumes:
      - hostPath:
          path: /Users/name/mongo-data
          type: Directory
        name: mongo-db-volume
...
Sairam Krish
  • 10,158
  • 3
  • 55
  • 67