
I'm fairly new to running Docker in production and I have a design question: what is the best approach to deploying a PHP app with Docker when the main application directory contains data and files used by other containers, and that content has to be updated with each new build?

Example (a simplified Symfony-style app):

- application
    - app
    - src
    - vendor
    - conf
    - conf/vhost.conf
    - web/assets/*

And let's simplify to only two services:

- php-fpm
- nginx

1/ My first try was to build two images:

  • php-fpm, with

    ADD . /var/www/html/project/
    VOLUME /var/www/html/project/

    and the vendors installed via composer in the Dockerfile.

  • nginx, with volumes_from: php-fpm

That way nginx was able to reach /var/www/html/project/ => and therefore the configuration, the assets, etc.
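Roughly, that first setup looked like this (a simplified sketch; the base image, composer invocation and compose layout are illustrative, not my exact files):

    # Dockerfile (sketch; assumes composer is available in the image)
    FROM php:7.0-fpm
    ADD . /var/www/html/project/
    WORKDIR /var/www/html/project/
    RUN composer install --no-dev --optimize-autoloader
    VOLUME /var/www/html/project/

    # docker-compose.yml (v2 syntax, sketch)
    version: '2'
    services:
      php-fpm:
        build: .
      nginx:
        image: nginx
        volumes_from:
          - php-fpm
        ports:
          - "80:80"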

But if I'm not mistaken, that is not the right way to do it: on the next build the image won't update /var/www/html/project/ (because it is a volume), so my code would never be updated.

2/ Then I ended up doing this:

- providing the latest code base in the image: COPY . /data/image/app
- creating a named volume: docroot
- mounting docroot on php-fpm
- adding an rsync in the entrypoint to sync /data/image/app to docroot:/var/www/html/project (with the excludes I needed)
- running the composer install of the vendors in the entrypoint

=> still using volumes_from: php-fpm on nginx.

Which is important, because I want nginx to have access to:

- the conf/vhost.conf
- the assets
- maybe other stuff

I may also need to add a Solr container that will use some configuration files and/or other resources, etc.
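To make approach 2/ concrete, here is a simplified sketch of the entrypoint and the volume wiring (the exclude patterns, paths and composer options are just examples, not my exact files):

    # Dockerfile (sketch)
    FROM php:7.0-fpm
    COPY . /data/image/app
    COPY docker-entrypoint.sh /docker-entrypoint.sh
    ENTRYPOINT ["/docker-entrypoint.sh"]
    CMD ["php-fpm"]

    # docker-entrypoint.sh (sketch)
    #!/bin/sh
    set -e
    # sync the code baked into the image into the shared docroot volume
    rsync -a --delete --exclude 'app/cache' --exclude 'app/logs' \
        /data/image/app/ /var/www/html/project/
    # install the vendors at container start
    composer install --no-dev --working-dir=/var/www/html/project
    exec "$@"

    # docker-compose.yml (v2 syntax, sketch)
    version: '2'
    services:
      php-fpm:
        build: .
        volumes:
          - docroot:/var/www/html/project
      nginx:
        image: nginx
        volumes_from:
          - php-fpm
    volumes:
      docroot: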

3/ I suppose there is another approach, which would consist of specifically ADDing what each image needs.

I think it adds complexity to the build process, but it also makes a lot of sense.
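For example (a sketch, reusing the layout above), the nginx image could be limited to just what it serves:

    # nginx/Dockerfile (sketch, built from the application root)
    FROM nginx
    ADD conf/vhost.conf /etc/nginx/conf.d/default.conf
    ADD web/assets/ /var/www/html/project/web/assets/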

So, what do you think? Have I missed something? Approach 2/, 3/, or is there a 4/ I haven't thought of?

Thank you very much!

Plopix

1 Answer


Your question is really about static file assets. Some frameworks and projects treat these very much like their own component. In the PHP world, the php-fpm application server doesn't normally handle static files; it leaves that up to a webserver component like nginx. This, on its own, is not a problem. In fact, it is good practice.

The only reason it's coming up is that you are introducing an isolation layer between php-fpm and nginx.

If we consider another, non-Docker situation where an isolation layer is introduced between php-fpm and nginx, we get the same problem. In this example, I'll have my nginx server running in the DMZ of my datacenter, and php-fpm acting as the application server behind the firewall. How will that nginx serve up the PHP project's static files?

I mentioned that static assets could almost be considered their own component. In this two-node example, a separate step during the deploy could be used to populate the static files on the nginx server in the DMZ. This is not unlike your solution #2, where you run an rsync to populate a volume that both the php-fpm and nginx containers have access to.
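In that two-node picture, the deploy step could be as simple as something along these lines (the host name and paths are placeholders):

    # run from the build/deploy host after building the release
    rsync -az --delete web/assets/ deploy@nginx-dmz:/var/www/html/project/web/assets/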

Yet another solution would be to make php-fpm handle serving up these static assets. This, of course, is not considered a best practice since php-fpm is not built to serve static files. It can be done, but it's poorly optimized. This performance hit can be mitigated by using nginx file caching.
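If you did go that route, the mitigation could look roughly like this on the nginx side (a sketch; the zone name, cache key and timings are arbitrary, and SCRIPT_FILENAME assumes a Symfony-style front controller):

    # nginx vhost (sketch): all requests, including static files, go to php-fpm,
    # but responses are cached by nginx
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:10m inactive=60m;

    server {
        listen 80;
        location / {
            include fastcgi_params;
            fastcgi_pass php-fpm:9000;
            fastcgi_param SCRIPT_FILENAME /var/www/html/project/web/app.php;
            fastcgi_cache phpcache;
            fastcgi_cache_key $scheme$request_method$host$request_uri;
            fastcgi_cache_valid 200 60m;
        }
    }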

Your #3 solution is also quite viable. You could have your project build two images instead of one. The first would be your normal PHP image with all your PHP code ADDed. The second would be based on the nginx base image and would ADD only the static files that the project needs.
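With two images like that, the compose file doesn't need a shared volume at all (a sketch; file locations are illustrative):

    # docker-compose.yml (v2 syntax, sketch)
    version: '2'
    services:
      php-fpm:
        build:
          context: .
          dockerfile: php-fpm/Dockerfile   # all the PHP code ADDed here
      nginx:
        build:
          context: .
          dockerfile: nginx/Dockerfile     # only vhost.conf and the static assets ADDed here
        ports:
          - "80:80"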

Although it isn't considered best practice, it may be appropriate on some projects to run both nginx and php-fpm in the same container. If your nginx configuration is literally only serving up static files and reverse proxying to php-fpm, then you can treat that pair of processes as one logical service. You would need to run supervisord, runit, or a similar process manager.
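A rough sketch of that single-container variant, assuming nginx, php-fpm and supervisor are all installed in the image:

    # supervisord.conf (sketch)
    [supervisord]
    nodaemon=true

    [program:php-fpm]
    command=php-fpm -F

    [program:nginx]
    command=nginx -g "daemon off;"

    # Dockerfile (sketch)
    COPY supervisord.conf /etc/supervisord.conf
    CMD ["supervisord", "-c", "/etc/supervisord.conf"]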

programmerq
  • Thanks! Your analogy with the DMZ is interesting, and I agree with you. #3 was planned with two images; I want to be able to scale php-fpm but not nginx. Two things are still not completely clear to me before making the decision: are there any obvious pitfalls with the rsync approach (with multiple containers starting at the same time, with a global volume (Swarm), or something else)? And what about more than static files: let's say I have multiple services and each of them needs a part of the code base. I guess #3 is the cleanest path, but would #2 be a valid, pragmatic approach in the Docker world? – Plopix Nov 02 '16 at 05:13
  • As far as running multiple copies of rsync goes, I don't believe that will cause any problems. If you are worried about multiple copies of rsync running, you can implement some simple lockfile logic, as discussed here: http://stackoverflow.com/questions/9390134/rsync-cronjob-that-will-only-run-if-rsync-isnt-already-running There isn't a concept of a global volume when using Swarm -- you can, however, back a volume with some sort of network filesystem. See the NFS example here: https://docs.docker.com/engine/reference/commandline/volume_create/ – programmerq Nov 02 '16 at 15:42
  • I don't like the #2 approach at all, because your code is no longer a part of your Docker image. Your image should be self-contained. It should be the build artifact for your project. When you take the code out of the image, you're no longer distributing an image that can stand on its own. – programmerq Nov 02 '16 at 15:43