You're confusing build time (basically the RUN instructions) with runtime (ENTRYPOINT or CMD). On top of that, you're breaking the rule of thumb "one container, one process", even though it's not a sacred one.
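To make the distinction concrete, here is a minimal sketch of a Node image (the base image and paths are just examples): everything in RUN executes once while the image is built, while CMD only executes when a container starts.

FROM node:18
WORKDIR /app

# build time: executed once, the result is baked into the image
COPY package*.json ./
RUN npm ci
COPY . .

# runtime: executed every time a container starts from the image;
# this single command is what supervisord replaces below when you
# need nginx and npm in the same container
CMD ["npm", "start"]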
My suggestion is to use Supervisord with a configuration like this:
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file
[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
loglevel=error ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid ; pidfile location
nodaemon=true ; run supervisord in the foreground (required in Docker, otherwise the container exits)
minfds=1024 ; minimum number of available file descriptors required to start
minprocs=200 ; minimum number of available process descriptors required to start
user=root ; default user
childlogdir=/var/log/supervisord/ ; where child log files will live
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:npm]
command=npm run --prefix /path/to/app start
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0 ; required when logging to a non-seekable target
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
[program:nginx]
command=nginx -g "daemon off;"
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
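To wire this in, install supervisord in the image and start it as the container's main process. A sketch, assuming a Debian-based image where supervisor comes from apt (the paths are illustrative, adjust them to wherever you copy the config):

# Dockerfile fragment
RUN apt-get update && apt-get install -y supervisor && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /var/log/supervisord
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]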
With the supervisord configuration above, logs are redirected to standard output, which is good practice compared to writing log files inside the container, since containers are ephemeral. You also get a single PID 1 process that is responsible for its children and can restart them according to specific rules.
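Those restart rules are configurable per [program:x] section; for example (the values here are just illustrative):

[program:npm]
command=npm run --prefix /path/to/app start
autostart=true ; start the process when supervisord starts
autorestart=true ; restart it automatically if it exits
startretries=3 ; give up after 3 consecutive failed starts
stopsignal=TERM ; signal used when stopping the process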
You could also try to achieve this with a bash script, but it tends to get tricky.
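A minimal entrypoint sketch (purely illustrative) already shows the limitation: one process has to be backgrounded, the other kept in the foreground, and nothing restarts either of them if it dies.

#!/bin/bash
set -e

# start the Node app in the background
npm run --prefix /path/to/app start &

# keep nginx in the foreground so the container stays alive;
# if either process crashes, nothing will restart it
exec nginx -g "daemon off;"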
An even better solution would be to use separate containers sharing a network, so that NGINX can forward requests to the npm upstream... but without Kubernetes it can be harder to maintain, even though it's not impossible with Docker alone :)
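With plain Docker, a rough sketch (the network, container, and image names are made up) is a user-defined network shared by both containers, so NGINX can reach the app by container name:

docker network create app-net

# Node app container (image name is hypothetical)
docker run -d --name app --network app-net my-node-app

# NGINX container; its config would proxy_pass to e.g. http://app:3000
docker run -d --name web --network app-net -p 80:80 my-nginx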