
I have just started using Docker, as it was recommended to me as something that makes development easy, but so far it has been nothing but pain. I have installed Docker Engine (v20.10.12) and Docker Compose (v2.2.3) following Docker's documentation for Ubuntu. Both work as intended.

Whenever I spin up a new container with Docker Compose, no matter the source, I run into write-permission issues with files generated by the container (for example, a Laravel application where I have used `php artisan` to create a controller file). I have so far pinpointed the issue as follows:

By default, Docker runs as root within the container. It "bridges" the container's root user to the root user on the host and uses root:root to create files on the Ubuntu filesystem (my workspace is in ~/workspace/laravel). When I then open the files in an IDE (VS Code in this instance), I get the error:

Failed to save to '<file_name>': insufficient permissions. Select 'Retry as Sudo' to retry as superuser

If I try to pass my own local user into the container and tell it to use that specific user ID and group ID, everything works as long as I'm using the first user created on the machine (1000:1000), since that matches the container's default user if we look at the bitnami/laravel Docker image, for example.

All of this can be fixed by running `chown -R yadayada .` on the workspace directory every time I use `php artisan` to create a file, but I do not think that is sustainable or smart in any way, shape or form.

How can I tell my Docker container to check, on startup, whether a user with my UID and GID exists and, if not, create a user with those IDs and assign it as a system user?

My docker-compose.yml for this example

version: '3.8'

services:
  api_php-database:
    image: postgres
    container_name: api_php-database
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: laravel_docker
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  api_php-apache:
    container_name: api_php-apache
    build:
      context: ./php
    ports:
      - '8080:80'
    volumes:
      - ./src:/var/www/laravel_docker
      - ./apache/default.conf:/etc/apache2/sites-enabled/000-default.conf
    depends_on:
      - api_php-database

My Dockerfile for this example

FROM php:8.0-apache

RUN apt update && apt install -y g++ libicu-dev libpq-dev libzip-dev zip zlib1g-dev && docker-php-ext-install intl opcache pdo pdo_pgsql pgsql

WORKDIR /var/www/laravel_docker

RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
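A common way to avoid the root:root ownership problem is to bake a matching user into the image at build time via build arguments. Below is a sketch, not the asker's actual Dockerfile: the build args `HOST_UID`/`HOST_GID` and the names `appuser`/`appgroup` are made up for illustration, and the IDs would be passed in from the host at build time.

```dockerfile
FROM php:8.0-apache

# Hypothetical build args; pass them from the host, e.g.:
#   docker compose build --build-arg HOST_UID=$(id -u) --build-arg HOST_GID=$(id -g)
ARG HOST_UID=1000
ARG HOST_GID=1000

# Create a group and user matching the host IDs so files written to the
# bind-mounted workspace get the same ownership as the host user
RUN groupadd -g "${HOST_GID}" appgroup \
 && useradd -u "${HOST_UID}" -g appgroup -m appuser

WORKDIR /var/www/laravel_docker
USER appuser
```

The trade-off is that the image is tied to the UID/GID it was built with, so each developer has to build their own image rather than pull a shared one.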
    One of Docker's major design points is that the host and container filesystems are supposed to be separate; writes inside the container can't normally affect the host filesystem, the host and container have separate username spaces, and so on. You need to work around all of these features to use Docker as a live development environment; using a language interpreter directly on your host outside a container will almost always be easier. – David Maze Mar 09 '22 at 15:54
  • I see your point, but what is the gain of Docker if you have to install - literally - everything as a mirror on your own machine? The whole idea for me is that you can run a container to execute commands through, without having to configure the local machine beyond Docker Engine and Docker Compose? – IncrediblePony Mar 10 '22 at 07:49
    Can you do something with the `-v` flag (see [here](https://stackoverflow.com/a/32270232/13020139))? With the two argument version, you could have Docker mount a directory from your host user. Then Docker would have its own configuration but any modifications to that directory persist past the life of the container. No need to "install" or copy an entire directory into the container as a mirror. – wxz Mar 14 '22 at 16:07
    Are you able to simply run the containers not as root? If you add your user account to the system group `docker`, you should be able to run the container without sudo, and all the writes will just be your user – Carson Mar 15 '22 at 11:43
  • @wxz - hmm.. might work, I have not tried binding the user/passwd/group files as volumes and keeping them in sync. I'm not sure that's a good idea in general, or if you use images from 3rd parties. I'll have a look. – IncrediblePony Mar 15 '22 at 19:37
  • @Carson - Even though I run the Docker container without sudo, `php artisan` still creates files as root:root in both the container and on my host machine. This is my problem. The user specified in `docker-compose.yml` or in `docker exec -it -u myuser my_app bash` is the one creating the files in the container. So if I try to assign my user to the container and it doesn't exist, it just defaults back to root:root, which leaves me at square one. – IncrediblePony Mar 15 '22 at 19:40
    @IncrediblePony You wouldn't have to worry about "keeping them in sync" with binding volumes because you'd be directly modifying the host files. – wxz Mar 15 '22 at 20:02

1 Answer


In general, this is not possible, but there are workarounds (I do not recommend them for production). The superuser UID is always 0; that is hard-coded in the kernel. Docker cannot automatically change the ownership of files created by root. For development, you can use these methods:

If superuser rights are not required, you can create the user dynamically; docker-compose.yml:

version: "3.0"
services:
  something:
    image: example-image
    volumes:
      - /user/path1:/container/path1
      - /user/path2:/container/path2
    # The double $ is needed to indicate that the variable is in the container
    command: ["bash", "-c", "chown -R $$HOST_UID:$$HOST_GID /container/path1 /container/path2; useradd -g $$HOST_GID -u $$HOST_UID user; su -s /bin/bash user"]
    environment:
      HOST_GID: 100
      HOST_UID: 1000
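Rather than hard-coding 1000/100 as above, the IDs can be read from the current user on the host and passed through the environment. A minimal host-side sketch (the `docker compose` invocation is left commented out since it needs a running daemon and the compose file above):

```shell
# Capture the current user's UID and GID on the host
HOST_UID="$(id -u)"
HOST_GID="$(id -g)"
echo "${HOST_UID}:${HOST_GID}"

# Then hand them to Compose so the container-side script picks them up, e.g.:
#   HOST_UID="$HOST_UID" HOST_GID="$HOST_GID" docker compose up -d
```

This keeps the compose file portable across developers whose host users have different IDs.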

Otherwise, if you run commands in the container as root in Bash: Bash runs the script stored in the PROMPT_COMMAND variable after each command is executed. This can be used during development by changing docker-compose.yml:

version: "3.0"
services:
  something:
    image: example-image
    volumes:
      - /user/path1:/container/path1
      - /user/path2:/container/path2
    command: ["bash"]
    environment:
      HOST_UID: 1000
      HOST_GID: 100
      # The double $ is needed to indicate that the variable is in the container
      PROMPT_COMMAND: "chown $$HOST_UID:$$HOST_GID -R /container/path1 /container/path2"
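The first approach can also be packaged as an entrypoint script instead of a one-line `command`, which is closer to what the question asks for (check on startup, create the user if missing, then drop privileges). A sketch of a hypothetical `entrypoint.sh` - the file name and the `hostuser`/`hostgroup` names are made up, it must run as root inside the container, and it assumes `setpriv` is available (it ships with util-linux in Debian-based images such as php:8.0-apache):

```shell
#!/bin/sh
# Hypothetical entrypoint.sh: on startup, create a user matching the
# host's UID/GID if one does not already exist, then run the container
# command as that user instead of root.
set -e

HOST_UID="${HOST_UID:-1000}"
HOST_GID="${HOST_GID:-1000}"

# Only create the group/user if those IDs are not already taken
getent group  "$HOST_GID" >/dev/null || groupadd -g "$HOST_GID" hostgroup
getent passwd "$HOST_UID" >/dev/null || useradd -u "$HOST_UID" -g "$HOST_GID" -m hostuser

# Drop privileges and hand off to the container command (CMD/command)
exec setpriv --reuid "$HOST_UID" --regid "$HOST_GID" --init-groups "$@"
```

Wired up via `entrypoint: ["/entrypoint.sh"]` in the compose file, files created by `php artisan` inside the bind mount would then appear on the host owned by your own user.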
  • Just so everyone is clear on this - I am not intending to use this in production :) I'm merely working in an environment where Docker is being implemented as the common denominator for the devs, and we experienced these issues with Linux but not macOS or Windows :) – IncrediblePony Mar 21 '22 at 09:59