
Here's the Dockerfile:

FROM nginx:stable-alpine

COPY ./mailservice /var/www/backend
COPY ./dist /usr/share/nginx/html
COPY ./docker/nginx_config/default.conf /etc/nginx/conf.d/default.conf
COPY ./docker/nginx_config/.htpasswd /etc/nginx

RUN chown -R nginx:nginx /usr/share/nginx/html/ \
    && chown -R nginx:nginx /etc/nginx/.htpasswd \
    && apk add --update nodejs nodejs-npm

WORKDIR /var/www/backend
RUN npm run start

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

But my `RUN npm run start` doesn't work: I have to manually attach a shell to the container and run it myself. What's the correct way to launch `npm run start` after the container has started?

UPDATE

CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["node", "server.js"]

Would this work?
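For reference, Docker's documented CMD/ENTRYPOINT interaction suggests it would not: when both are given in exec form, CMD only supplies default arguments to ENTRYPOINT. A sketch of what the two lines above would actually run:

```dockerfile
ENTRYPOINT ["node", "server.js"]
CMD ["nginx", "-g", "daemon off;"]
# At container start Docker concatenates the two and executes:
#   node server.js nginx -g "daemon off;"
# nginx ends up as arguments passed to node, so only node starts.
```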

Alexander Kim
  • The proper way is by using `ENTRYPOINT` and `CMD` because these are used at the point of starting a container. `RUN` is used to execute a command at the stage of building the image. – tgogos Dec 20 '18 at 18:44
  • You mean both - `ENTRYPOINT` and `CMD`? An example, please? – Alexander Kim Dec 20 '18 at 18:44
  • 1
    Check this: [Understand how CMD and ENTRYPOINT interact](https://docs.docker.com/engine/reference/builder/#understand-how-cmd-and-entrypoint-interact). – tgogos Dec 20 '18 at 18:46
  • @tgogos, updated my question, correct me, please. Now everything is broken... Still can't figure out. – Alexander Kim Dec 20 '18 at 18:53

4 Answers


Best practice says you shouldn't run more than one process per container, unless your application is designed to start multiple processes from a single entrypoint.

But there are some workarounds you can use. Check this question: Docker multiple entrypoints

Daniel Asanome

Solved it this way:

Dockerfile

FROM nginx:stable-alpine

COPY ./mailservice /var/www/backend
COPY ./dist /usr/share/nginx/html
COPY ./docker/nginx_config/default.conf /etc/nginx/conf.d/default.conf
COPY ./docker/nginx_config/.htpasswd /etc/nginx

RUN chown -R nginx:nginx /usr/share/nginx/html/ \
    && chown -R nginx:nginx /etc/nginx/.htpasswd \
    && apk add --update nodejs nodejs-npm

ADD ./docker/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod 755 /docker-entrypoint.sh

EXPOSE 80

WORKDIR /

CMD ["/docker-entrypoint.sh"]

docker-entrypoint.sh

#!/usr/bin/env sh

# Start the Node backend in the background, then replace the shell with nginx.
node /var/www/backend/server.js > /var/log/node-server.log 2>&1 &
exec /usr/sbin/nginx -g "daemon off;"
Alexander Kim
  • If for whatever reason your Node backend crashes, Docker land won't notice it; the nginx process will keep running and spit out 503 errors until an operator notices and restarts the container. – David Maze Dec 20 '18 at 20:08

You're confusing build time (basically `RUN` instructions) with runtime (`ENTRYPOINT` or `CMD`) and, beyond that, you're breaking the one-process-per-container rule, even though that rule is not a sacred one.

My suggestion is to use Supervisord with this configuration:

[unix_http_server]
file=/tmp/supervisor.sock                       ; path to your socket file

[supervisord]
logfile=/var/log/supervisord/supervisord.log    ; supervisord log file
loglevel=error                                  ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid                ; pidfile location
nodaemon=true                                   ; run supervisord in the foreground (required in a container)
minfds=1024                                     ; number of startup file descriptors
minprocs=200                                    ; number of process descriptors
user=root                                       ; default user
childlogdir=/var/log/supervisord/               ; where child log files will live

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///tmp/supervisor.sock         ; use a unix:// URL  for a unix socket

[program:npm]
command=npm run --prefix /path/to/app start
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:nginx]
command=nginx -g "daemon off;"
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

With this configuration you will have the logs redirected to standard output, which is good practice (rather than files inside the container, which is ephemeral), and you will have a PID 1 process responsible for handling the child processes and restarting them with specific rules.

You could try to achieve this with a shell script as well, but it can be tricky.
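A minimal sketch of how the image itself might wire this up (the `supervisor` Alpine package and the `/etc/supervisord.conf` path are assumptions, not taken from the question):

```dockerfile
FROM nginx:stable-alpine

# Assumption: the supervisor package is available in the Alpine repositories.
RUN apk add --update nodejs nodejs-npm supervisor

COPY ./docker/supervisord.conf /etc/supervisord.conf

EXPOSE 80

# Run supervisord in the foreground as PID 1 (nodaemon=true in the config),
# and let it start and supervise both nginx and the Node app.
CMD ["supervisord", "-c", "/etc/supervisord.conf"]
```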

Another good solution would be separate containers sharing a network namespace, with NGINX forwarding requests to the Node upstream... without Kubernetes that can be hard to maintain, but it's not impossible with plain Docker :)

prometherion
  • Your current approach is fundamentally wrong by design.
  • Your current approach is a clear anti-pattern for containers.
  • Create a Dockerfile for your app.
  • Create a separate Dockerfile for nginx.
  • Use docker-compose to build the stack, or compose it your own way.
  • Always run the app and the proxy in separate containers.
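The split suggested above could look roughly like this (service names, the app port, and build contexts are hypothetical):

```yaml
# docker-compose.yml - hypothetical two-container split
version: "3"
services:
  backend:
    build: ./mailservice        # its own Dockerfile, CMD ["node", "server.js"]
    expose:
      - "3000"                  # assumed app port
  proxy:
    build: ./docker/nginx       # nginx Dockerfile with the static files
    ports:
      - "80:80"
    depends_on:
      - backend                 # nginx proxies to http://backend:3000
```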
Ijaz Ahmad