I am trying to run multiple js files from a bash script, like this. It doesn't work: the container comes up but doesn't run the script. However, when I ssh into the container and run the script there, it runs fine and the node service comes up. Can anyone tell me what I am doing wrong?

Dockerfile

FROM node:8.16

MAINTAINER Vivek

WORKDIR /a

ADD . /a
RUN cd /a && npm install

CMD ["./node.sh"]

Script is as below

node.sh

#!/bin/bash

set -e

node /a/b/c/d.js &

node /a/b/c/e.js &
    If you put both processes in the background with `&` in the entrypoint script, the script will do that and then exit. You need a foreground process. You should leave one in the foreground, use `wait`, use supervisord, etc. See https://docs.docker.com/config/containers/multi-service_container/ – chash May 05 '20 at 18:25
  • Thanks. I have removed & from my last command to let the process not exit. – user6904665 May 06 '20 at 04:44

2 Answers


As @chash mentions in the comments, your script does run, but the container exits because it isn't waiting for your two sub-processes to finish.

You could change your node.sh to:

#!/bin/bash

set -e

node /a/b/c/d.js &
pid1=$!

node /a/b/c/e.js &
pid2=$!

wait "$pid1"
wait "$pid2"

Checkout https://stackoverflow.com/a/356154/1086545 for a more general solution of waiting for sub-processes to finish.
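Note that `wait` also returns each child's exit status, which combines with `set -e` so the script aborts if either background job fails. A minimal sketch, using `sleep` as a stand-in for the two node processes:

```shell
#!/bin/bash
set -e

# Stand-ins for the two node processes
sleep 0.1 &
pid1=$!
sleep 0.1 &
pid2=$!

# wait returns each child's exit status, so with `set -e`
# the script aborts if either background job fails
wait "$pid1"
wait "$pid2"
echo "both children exited"
```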

As @DavidMaze mentions in his answer, a container should generally run one "service". It is of course up to you to decide what constitutes a service in your system. As described in the official Docker documentation:

It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application.

See https://docs.docker.com/config/containers/multi-service_container/ for more details.

abondoa

Typically you should run only a single process in a container. However, you can run any number of containers from a single image, and it's easy to set the command a container will run when you start it up.

Set the image's CMD to whatever you think the most common path will be:

CMD ["node", "b/c/d.js"]

If you're using Docker Compose for this, you can specify build: . for both containers, but in the second container, specify an alternate command:

version: '3'
services:
  node-d:
    build: .
  node-e:
    build: .
    command: node b/c/e.js

Using bare docker run, you can specify an alternate command after the image name:

docker build -t me/node-app .
docker run -d --name node-d me/node-app
docker run -d --name node-e me/node-app \
  node b/c/e.js

This lets you do things like independently set restart policies for each container; if you run this in a clustered environment like Docker Swarm or Kubernetes, you can independently scale the two containers/pods/processes as well.
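For example (the policies here are hypothetical, just to illustrate), the two containers can get different restart behaviour with plain docker run:

```shell
# Always restart node-d when it stops, but give up on node-e
# after three consecutive failures
docker run -d --restart unless-stopped --name node-d me/node-app
docker run -d --restart on-failure:3 --name node-e me/node-app \
  node b/c/e.js
```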

David Maze