
Given a pod running an Nginx container and a PHP-FPM container, what is the best practice for the application's document permissions?

At the moment I have a volume shared between the containers so that Nginx has access to the PHP files. This works, but the files are owned by the user www-data in the FPM container, which does not exist in the Nginx container, so there they show up as owned by whichever user happens to have the same UID.
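
For context, a minimal sketch of the setup described above; the image tags, volume name, and mount path are illustrative rather than taken from the actual manifests:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  volumes:
    - name: app-code
      emptyDir: {}                  # shared volume holding the PHP files
  containers:
    - name: nginx
      image: nginx:1.21
      volumeMounts:
        - name: app-code
          mountPath: /var/www/html  # Nginx serves static files from here
    - name: php-fpm
      image: php:8.1-fpm
      volumeMounts:
        - name: app-code
          mountPath: /var/www/html  # FPM writes the files here as www-data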

This is obviously wrong, but then what's right? Options I've considered so far:

  • Files are owned by nobody:nogroup
  • Make a copy of the files for Nginx, and assign ownership to the nginx user in that container
  • Align the UIDs
  • Run both Nginx and FPM in the same container

None of these seem appealing.

Afraz

1 Answer


This is a case for a Security Context in Kubernetes, where you can specify a UID, GID, or supplementary GID (fsGroup) for your pods.

For example, setting:

spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000

will make your processes run as user 1000, with primary group 3000 and supplementary group 2000.

You haven't specified whether both containers need to edit files in that volume. If not, adding only fsGroup should be enough to give read access (by default) to your files, without affecting your existing workloads in any meaningful way.
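
A minimal sketch of that fsGroup-only variant, with an arbitrary GID of 2000; only the pod-level securityContext changes:

spec:
  securityContext:
    # no runAsUser/runAsGroup: each container keeps its own user,
    # but all processes get GID 2000 as a supplementary group and
    # files in the shared volume are group-owned by GID 2000
    fsGroup: 2000

Note that fsGroup only takes effect on volume types whose ownership Kubernetes can manage (emptyDir and most dynamically provisioned volumes qualify; hostPath does not).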

Otherwise you can force the same UIDs, but that might require you to reconfigure your applications.
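
As an illustration of that route, the UID can also be pinned per container rather than at the pod level. A sketch, assuming both images are Debian-based so www-data is UID 33; the value would need to be checked against the actual images:

spec:
  containers:
    - name: nginx
      image: nginx:1.21
      securityContext:
        runAsUser: 33   # www-data on Debian-based images (assumed)
    - name: php-fpm
      image: php:8.1-fpm
      securityContext:
        runAsUser: 33   # same UID so file ownership matches in both containers

The caveat above applies: the stock nginx image starts as root and drops privileges itself, so forcing a non-root UID typically means adjusting its configuration (pid file location, listening on a port above 1024).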

See also: Kubernetes: how to correctly set php-fpm and nginx shared volume permission

Andrew
  • I looked at `securityContext`, but since there are no users in either container that have UID 1000, wouldn't that effectively be the same as running as `nobody`? – Afraz Jan 18 '22 at 09:28
  • You don't need to use `runAsUser`; as I specified in my answer, `fsGroup` would be enough. It will make all files in the volume owned by group 3000 (in my example) and make both pods run with the extra 3000 group, thus granting read access (if your file mode allows it) – Andrew Jan 18 '22 at 09:32
  • I should have read the docs better, you're right, this is exactly what I needed. Well, there's a single directory in there that also needs write access on one of the containers, but I'm sure I can fix that. – Afraz Jan 18 '22 at 09:40
  • Turns out I _can't_ fix it easily :/ I read that all processes are part of the `fsGroup` GID, but giving a directory write permissions for that group still doesn't allow writing by the container's process. I could use `runAsGroup`, but I'd rather avoid that, and it sounds like I shouldn't have to anyway? – Afraz Jan 18 '22 at 11:49
  • Set `fsGroup` to the same ID as the running UID of your first pod; this way your first pod will be able to write to that directory, and the second pod will be able to read. You can even mount the volume in the second pod as read-only, just to be sure that only one pod can actually write anything to the volume. Check the Dockerfile, or exec into that pod to find out which user ID is used by your processes – Andrew Jan 18 '22 at 11:54
  • Ha, I'm right back where I started from. Setting `fsGroup` to the same UID as the first container's running UID means that the files are group-owned by a non-existent GID on the second container. I'm beginning to think that aligning at least the GID is the only option :/ – Afraz Jan 18 '22 at 13:23
  • What kind of storage provisioner are you using? Does it properly support groups? Have you tried to specify fsGroup in both pods, or did you only do it in the second pod? If the latter, Kubernetes won't chown the files to the proper group – Andrew Jan 18 '22 at 15:01
  • It's just an `emptyDir`. I tried a few combinations including supplying `fsGroup` in the containers individually. – Afraz Jan 18 '22 at 15:54
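
For reference, a single-pod sketch of the arrangement discussed in this thread: fsGroup matched to the UID the FPM workers run as, with the volume mounted read-only on the Nginx side. The UID/GID 33, image tags, and paths are assumptions, not taken from the actual images:

spec:
  securityContext:
    fsGroup: 33                     # assumed: the UID/GID the FPM workers run as
  volumes:
    - name: app-code
      emptyDir: {}
  containers:
    - name: php-fpm
      image: php:8.1-fpm
      volumeMounts:
        - name: app-code
          mountPath: /var/www/html  # read-write; this container creates the files
    - name: nginx
      image: nginx:1.21
      volumeMounts:
        - name: app-code
          mountPath: /var/www/html
          readOnly: true            # Nginx only ever reads from the volume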