I'm fairly new to Docker in production and have a design question. What is the best approach to deploying a PHP app with Docker when the main application directory contains data and files used by other containers, and that content has to be updated across builds?
Example (simplified Symfony-style app):
- application
    - app
    - src
    - vendor
    - conf
        - conf/vhost.conf
    - web/assets/*
And let's simplify to only two services:
- php-fpm
- nginx
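For context, the compose file I have in mind is roughly this (image tag and ports are placeholders):

```yaml
version: '2'

services:
  php-fpm:
    build: .

  nginx:
    image: nginx:alpine
    # shares the php-fpm container's volumes, including the docroot
    volumes_from:
      - php-fpm
    ports:
      - "80:80"
```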
1/ My first try was to build two images:
- php-fpm, with:
    - ADD . /var/www/html/project/
    - VOLUME /var/www/html/project/
    - the vendors (composer install) installed in the Dockerfile
That way nginx could reach /var/www/html/project/ through
volumes_from: php-fpm
=> and with it the configuration, the assets, etc.
But if I'm not wrong, that is not the right way to do it: on the next build my image won't update /var/www/html/project/ (because it is a VOLUME), so my code would never be updated.
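To make 1/ concrete, the php-fpm Dockerfile was roughly this sketch (base image and composer availability are simplified):

```dockerfile
FROM php:7.1-fpm
WORKDIR /var/www/html/project
ADD . /var/www/html/project/
# assumes composer is available in the image
RUN composer install --no-dev --no-interaction
# declared as a volume so nginx can mount the code via volumes_from
VOLUME /var/www/html/project/
```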
2/ Then I ended up doing this:
- shipping the latest code base in the image: COPY . /data/image/app
- creating a named volume: docroot
- mounting docroot on php-fpm
- adding an rsync to the entrypoint to sync /data/image/app to docroot:/var/www/html/project (with the excludes I needed)
- running the vendors (composer) install in the entrypoint
=> still using volumes_from: php-fpm on nginx.
Which is important because I want:
- the conf/vhost.conf
- the assets
- maybe other stuff
I may later need to add a Solr container that will use some configuration files and/or resources, etc.
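The entrypoint from 2/ is roughly this sketch (the exclude pattern is a placeholder, not my real list):

```sh
#!/bin/sh
set -e

# Refresh the shared docroot from the code baked into the image,
# so every new build updates the named volume.
rsync -a --delete --exclude 'var/' \
    /data/image/app/ /var/www/html/project/

# Install vendors on the volume at container start.
cd /var/www/html/project
composer install --no-dev --no-interaction

# Hand off to the main process (php-fpm).
exec "$@"
```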
3/ I suppose there is another approach that would consist of specifically ADDing what I need to each image.
I think it adds complexity to the build process, but it also makes a lot of sense.
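For 3/, I imagine each Dockerfile copying only what that service needs, something like this for nginx (target paths are illustrative):

```dockerfile
FROM nginx:alpine
# only the pieces nginx actually needs from the code base
COPY conf/vhost.conf /etc/nginx/conf.d/default.conf
COPY web/ /var/www/html/project/web/
```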
So, what do you think: have I missed something? Approach 2/, 3/, or a 4/ I haven't thought of?
Thank you very much!