
I have the following Dockerfile for a php runtime based on the official [php][1] image.

FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libmcrypt-dev \
        libpng12-dev \
        zip \
        unzip \
    && docker-php-ext-install -j$(nproc) iconv mcrypt \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd \
    && docker-php-ext-install mysqli \
    && docker-php-ext-enable opcache \
    && php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php -r "if (hash_file('SHA384', 'composer-setup.php') === '669656bab3166a7aff8a7506b8cb2d1c292f042046c5a994c43155c0be6190fa0355160742ab2e1c88d40d5be660b410') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
    && php composer-setup.php \
    && php -r "unlink('composer-setup.php');" \
    && mv composer.phar /usr/local/bin/composer

I am having trouble running composer install.

I am guessing that the Dockerfile runs before the volume is mounted, because I receive a `composer.json` file not found error when adding:

...
&& mv composer.phar /usr/local/bin/composer \
&& composer install

to the above.

But, adding the following property to docker-compose.yml:

command: sh -c "composer install && composer require drush/drush"

seems to terminate the container after the command finishes executing.

Is there a way to:

  • wait for a volume to become mounted
  • run composer install using the mounted composer.json file
  • have the container keep running afterwards?

Raphael Rafatpanah
  • Dude remove that hash check as it changes very often, with every new version of the Composer. By the way, you want to have your custom entry point that will do the install when the container has started. Also, you should move your application data to a data only container so you can split responsibilities. Then your application can be deployed independently from the web server's infrastructure. Good luck – Mike Doe Jul 01 '17 at 20:22
  • @mike would you be able to provide an example of using a data only container and how it could be used with docker-compose? Or, provide a resource that you think describes this concept well? Or both? :-) cheers – Raphael Rafatpanah Jul 02 '17 at 12:09
  • There are a lot of examples to be found on the net, just look for `data only container` or `docker persistence strategy`. What you do is simply build your app (composer install / docs / assets etc), then get smallest image possible (144 bytes image available!) and copy everything to it. Mount that data to apache's root in the php image using the `volumes_from` directive in the docker-compose. All that is possible with any CI/CD system like Jenkins, Gitlab etc. – Mike Doe Jul 02 '17 at 12:42
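
A minimal sketch of the custom entrypoint approach suggested in the comments, assuming the code is mounted at /var/www/root as in the question (the script name and exact flags here are illustrative, not from the original post):

#!/bin/sh
# docker-entrypoint.sh (sketch): runs on every container start,
# after any volumes have been mounted
set -e

# Install dependencies if a composer.json is present in the working directory
if [ -f composer.json ]; then
    composer install --no-interaction
fi

# Hand off to the main process (php-fpm by default) so the container keeps running
exec "$@"

And the corresponding Dockerfile lines:

COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["php-fpm"]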

4 Answers


I generally agree with Chris's answer for local development. I am going to offer something that builds on a recent Docker feature and may set a path for doing both local development and eventual production deployment with the same image.

Let's first start with an image that can be built in a way that works for both local development and deployment, and that contains the code and dependencies. In the latest Docker version (17.05) there is a new multi-stage build feature that we can take advantage of. In this case we can first install all your Composer dependencies to a folder in the build context and then later copy them to the final image without needing to add Composer to the final image. This might look like:

FROM composer as composer
COPY . /app
RUN composer install --ignore-platform-reqs --no-scripts

FROM php:fpm
WORKDIR /var/www/root/
RUN apt-get update && apt-get install -y \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libmcrypt-dev \
        libpng12-dev \
        zip \
        unzip \
    && docker-php-ext-install -j$(nproc) iconv mcrypt \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd \
    && docker-php-ext-install mysqli \
    && docker-php-ext-enable opcache
COPY . /var/www/root
COPY --from=composer /app/vendor /var/www/root/vendor

This removes all of Composer from the application image itself and instead uses the first stage to install the dependencies in another context and copy them over to the final image.

Now, during development you have some options. Based on your docker-compose.yml command it sounds like you are mounting the application into the container as .:/var/www/root. You could add a composer service to your docker-compose.yml similar to my example at https://gist.github.com/andyshinn/e2c428f2cd234b718239. Here, you just do docker-compose run --rm composer install when you need to update dependencies locally (this keeps the dependency build inside the container, which can matter for natively compiled extensions, especially if you are deploying as containers and developing on Windows or Mac).
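
For illustration, such a composer service might look roughly like this (the service names and paths are assumptions based on the question, not copied from the linked gist):

# docker-compose.yml (sketch)
version: '3'
services:
  app:
    build: .
    volumes:
      - .:/var/www/root
  composer:
    image: composer:1.4
    volumes:
      - .:/app
    command: install --ignore-platform-reqs --no-scripts

With something like this, docker-compose run --rm composer performs the install inside a Linux container, so dependencies are resolved against the container environment rather than the host.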

The other option is to just do something similar to what Chris has already suggested, and use the official Composer image to update and manage dependencies when needed. I've done something like this locally before where I had private dependencies on GitHub which required SSH authentication:

docker run --rm --interactive --tty \
            --volume $PWD:/app:rw,cached \
            --volume $SSH_AUTH_SOCK:/ssh-auth.sock \
            --env SSH_AUTH_SOCK=/ssh-auth.sock \
            --volume $COMPOSER_HOME:/composer \
            composer:1.4 install --ignore-platform-reqs --no-scripts

To recap, the reasoning for this method of building the image and installing Composer dependencies using an external container / service:

  • Platform specific dependencies will be built correctly for the container (Linux architecture vs Windows or Mac).
  • No Composer or PHP is required on your local computer (it is all contained inside Docker and Docker Compose).
  • The initial image you built is runnable and deployable without needing to mount code into it. In development, you are just overriding the /var/www/root folder with a local volume.
Andy Shinn
  • The sentence "In this case we can first install all your Composer dependencies to a folder in the build context and then later copy them to the final image without needing to add Composer to the final image." was an eye-opener for me. Didn't get why the /vendor folder wasn't showing up in my final image, but with the "COPY --from=composer /app/vendor /var/www/root/vendor" command I'm up and running! Thanks a lot @Andy Shinn!! – Dennis Ameling Mar 13 '18 at 10:12

I've been down this rabbit hole for 5 hours; all of the solutions out there are way too complicated. The easiest solution is to exclude vendor, node_modules, and similar directories from the volume mount.

#docker-compose.yml
volumes:
      - .:/srv/app/
      - /srv/app/vendor/

So this will map the current project directory but exclude its vendor subdirectory. Don't forget the trailing slash!

So now you can easily run composer install in the Dockerfile, and when Docker mounts your volume it will leave the vendor directory alone.
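
With that exclusion in place, a rough sketch of the Dockerfile side (this assumes Composer is available in the image, as in the question's Dockerfile, and that a composer.lock is committed alongside composer.json):

WORKDIR /srv/app

# Copy only the Composer manifests first so this layer stays cached until they change
COPY composer.json composer.lock ./

# Populate /srv/app/vendor at build time; the anonymous volume declared in
# docker-compose.yml keeps it from being hidden by the bind mount at run time
RUN composer install --no-interaction --no-scripts

# Then copy the rest of the source
COPY . .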

Buksy
  • I would like opposite. Or addition to this. To use vendor from build as a starting point for future development with docker-compose. I see an option to copy vendor from build to local before docker-compose up. Any other option maybe? – Vladimir Vukanac Jun 11 '20 at 18:17

If this is for a general development environment, then the intention is not really ideal because it couples the application to the Docker configuration.

Just run composer install separately by some other means (there is an image available for this on Docker Hub, which allows you to just do `docker run -it --rm -v $(pwd):/app composer/composer install`).


But yes, it is possible; you would need the last line in the Dockerfile to be `bash -c "composer install && php-fpm"`.
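
As a rough sketch, the final instruction could be something like this, so the install happens when the container starts (after volumes are mounted) and php-fpm then runs in the foreground, keeping the container alive:

CMD ["bash", "-c", "composer install && php-fpm"]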


  • wait for a volume to become mounted

No, volumes cannot be mounted during a docker build process, though you can copy the source code in.

  • run composer install using the mounted composer.json file

No, see above response.

  • have the container keep running after

Yes, you would need to execute `php-fpm --nodaemonize` (which is a long-running process, hence it won't terminate).

Chris Stryczynski
  • You could argue that everything `composer install` adds _are_ dependencies of the application that should be contained in the image. Doing this externally opens up the possibility of these dependencies missing or being mis-matched during deployment or between environments. – Andy Shinn Jul 01 '17 at 20:11
  • Good point. In that case the source code would have to be added in as well, which does not sound like the intention of the OP. – Chris Stryczynski Jul 01 '17 at 20:15
  • I agree. I think for development your answer does make sense and I like that you modified it to be more of the "Docker way" by using the `composer` image. I am also trying to think ahead for when you might eventually deploy the application to a production environment, in which case the answer could be different. – Andy Shinn Jul 01 '17 at 20:19

To execute a command after you have mounted a volume on a Docker container:

Assuming that you are fetching dependencies from a public repo:

docker run --interactive -t --privileged --volume $(pwd):/xyz --workdir /xyz composer /bin/sh -c 'composer install'

For fetching dependencies from a private Git repo, you would need to copy/create SSH keys; that is probably out of scope for this question.

rizways