
TL;DR: Getting the error /docker-entrypoint.sh: line 10: exec: nginx: cannot execute: Is a directory when running docker run username/srdc_prod:2020.07.26, even though docker-compose up works just fine.

I have a Docker image that I have pushed to Docker Hub. I am able to run it locally with docker-compose up. I pulled it down on my remote server and now I am trying to run it with docker run username/srdc_prod:2020.07.26.

When I do this I get the error /docker-entrypoint.sh: line 10: exec: nginx: cannot execute: Is a directory. Running docker-compose up locally works just fine. The goal is to push this image to Docker Hub, pull it down, and run it on my remote server. However, I cannot use docker-compose up there because the docker-compose.yaml file does not exist on the remote machine. I realize I could clone my GitHub repo onto the server and then run docker-compose up, but that defeats the purpose of pushing the image to Docker Hub and eventually setting up CI/CD.

Is there a way to run docker-compose up against a pulled image? Or is it possible to see all of the commands it executes? I saw in this question that I can use docker ps --no-trunc to see the full commands of running docker containers, but all of these rely on the docker-entrypoint.sh file, which I cannot seem to execute remotely. Am I doing this all wrong, and do I need to refactor my Dockerfile to run on the remote server?
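For reference, the entrypoint and command baked into an image can also be inspected directly with the standard Docker CLI, without a running container. A quick sketch against the image from this question:

# Show the ENTRYPOINT and CMD the image will run
docker image inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' username/srdc_prod:2020.07.26

# Full, untruncated commands of running containers
docker ps --no-trunc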

Image on Remote Server

username@ubuntu-512mb-name:~$ docker image ls
WARNING: Error loading config file: /home/username/.docker/config.json: stat /home/username/.docker/config.json: permission denied
REPOSITORY                TAG                 IMAGE ID            CREATED             SIZE
username/srdc_prod        2020.07.26          af62927882ee        5 days ago          504MB

docker-entrypoint.sh

  1 #!/bin/bash
  2
  3 echo "Collect static files"
  4 python manage.py collectstatic --noinput
  5
  6 echo "Apply database migrations"
  7 python manage.py migrate --noinput
  8
  9 echo "Starting daphne server"
 10 exec "$@" # offending line

Dockerfile

FROM python:3-alpine

# set env vars
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

RUN mkdir /app

RUN apk update && apk add --no-cache postgresql-dev gcc libffi-dev musl-dev build-base python3-dev bash

COPY requirements.txt /app/requirements.txt
RUN pip install --upgrade pip && pip install --no-cache-dir -r /app/requirements.txt

COPY test-requirements.txt /app/test-requirements.txt
RUN pip install --upgrade pip && pip install --no-cache-dir -r /app/test-requirements.txt

COPY . /app
# set working directory
WORKDIR /app

RUN mkdir -p /var/www/srdc/static/
RUN chmod 755 /var/www/srdc/static/

EXPOSE 8000

ENV DJANGO_SETTINGS_MODULE=srdc.settings

CMD ["nginx", "-g", "daemon off;"]

ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]

docker-compose.yml

version: '3'

services:
  django_web:
    build: .
    command: bash -c "daphne -b 0.0.0.0 -p 8000 srdc.asgi:application"
    expose:
       - 8000
    image: srdc:v0
    container_name: srdc_django_web
    volumes:
      - .:/app
      - static_volume:/var/www/srdc/static/
    depends_on:
      - db
  nginx:
    build: ./nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - static_volume:/var/www/srdc/static/
      - ./data/certbot/conf:/etc/letsencrypt
      - ./data/certbot/www:/var/www/certbot
    depends_on:
      - django_web
  db:
    image: postgres
    container_name: "my_postgres"
    ports:
      - "54320:5432"
    volumes:
      - my_dbdata:/var/lib/postgresql/data
volumes:
  my_dbdata:
  static_volume:
  media_volume:
  • Maybe I'm missing something, but in your Dockerfile you don't appear to be installing nginx, yet your `CMD` is `nginx -g ...`. Should that command be `daphne ...`? – benbotto Aug 01 '20 at 14:25
  • There is a directory called 'nginx' that has a Dockerfile. I should probably include that. – Scott Skiles Aug 03 '20 at 17:43
  • But that's the `Dockerfile` for the `nginx` service, right? Your question appears to be about the `django_web` service, based on the image you're running (`srdc_prod`), which doesn't need `nginx`'s `Dockerfile`. You have three containers listed, and it looks like you're running the `nginx` command in the wrong one. – benbotto Aug 03 '20 at 18:37

1 Answer


The error you're seeing is accurate: nginx is a directory. Based on your docker-compose.yml manifest, there is an nginx folder in your project root, which you use as the build context for your nginx service.

  nginx:
    build: ./nginx # Here's the evidence for the nginx folder.
    ports:

When you build the django_web image, you copy the entire build context into /app, and that includes the nginx directory.

COPY . /app
# set working directory
WORKDIR /app
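If you want to confirm this, you can list the path inside the image; the --entrypoint flag bypasses docker-entrypoint.sh, so nothing else runs (a quick sanity check, not part of the fix):

# Show that COPY . /app brought the nginx folder into the image
docker run --rm --entrypoint ls username/srdc_prod:2020.07.26 -ld /app/nginx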

The CMD for your username/srdc_prod image is nginx -g daemon off;, which your docker-entrypoint.sh executes via exec "$@". That fails because nginx resolves to the copied-in directory rather than an executable.

Based on your docker-compose.yml manifest, it looks like the CMD you actually want is daphne -b 0.0.0.0 -p 8000 srdc.asgi:application, or something like that, but not nginx, which is not installed in that Alpine-based image.
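Because the entrypoint ends with exec "$@", anything passed after the image name on the docker run command line replaces the baked-in CMD. So, assuming the database is reachable from the remote host, you could test the correct command without rebuilding:

# Override the broken CMD at run time; collectstatic and migrate still run first
docker run -p 8000:8000 username/srdc_prod:2020.07.26 \
  daphne -b 0.0.0.0 -p 8000 srdc.asgi:application

The durable fix is to change the Dockerfile's CMD to that daphne command and rebuild.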

Some recommendations outside the scope of the question:

  1. If you're using docker-compose in dev, consider using it in your hosted environments, too, instead of running raw docker run commands (see the sketch after this list).
  2. Better yet, use Docker in swarm mode. You can reuse your manifest file that way, though you would need to remove some of the deprecated stuff (depends_on, for example) and expand on the service definitions a bit. Using swarm mode, or some other orchestration tool, will make it easier to scale your service in the future, plus you get some other handy features like secrets, restart policies, network segregation, and so on.
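A rough sketch of both options, with hypothetical server paths; only the manifest (and any env files) lives on the server, and the image itself is pulled from the registry. This assumes the image: fields in the manifest point at the registry images rather than local build contexts:

# Option 1: plain docker-compose on the remote host
scp docker-compose.yml user@remote:/srv/srdc/
ssh user@remote 'cd /srv/srdc && docker-compose pull && docker-compose up -d'

# Option 2: swarm mode, reusing an adapted version of the same manifest
ssh user@remote 'docker swarm init'   # one-time setup on the manager node
ssh user@remote 'cd /srv/srdc && docker stack deploy -c docker-compose.yml srdc'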
  • Thank you. Can you elaborate on this `If you're using docker-compose in dev, consider using that in your hosted environments, too, instead of running the raw docker run commands.`? I did not see an easy way to do this without cloning the repo onto the remote server. I'll look into swarm mode. Thanks! – Scott Skiles Aug 01 '20 at 17:50
  • Generally you don't need to put anything on the remote server except for a docker-compose.yml file, and maybe some env files for configuration. All of the images should exist in the cloud in some container registry, and each image should have everything it needs to run (i.e. all the application code and assets and whatnot). In the YAML file you shared, the django container mounts the project directory over `/app`, but in the built image the code is already copied into `/app`. As such, you should not need to clone the repo on the remote server. – benbotto Aug 03 '20 at 14:09
  • So in this setup I need to maintain two separate docker-compose.yaml files? Do you have any links you can point me to for running docker-compose in prod? Thanks! – Scott Skiles Aug 14 '20 at 20:17
  • I don't have any links, sorry. I use Docker Swarm in prod, QA, and UAT. I personally only use a single docker-compose.yml file across all environments, but I have separate environment files for each env. IMO, the service definitions should be identical across all environments, and things that change from env to env should be configurable using environment variables. – benbotto Aug 14 '20 at 22:56
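For concreteness, the per-environment configuration the last comment describes might look like this (hypothetical file names; --env-file controls variable substitution in the manifest):

# Same docker-compose.yml everywhere; only the env file differs per environment
docker-compose --env-file .env.prod up -d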