
I have a pip command that needs to reach a package server running in a local Docker container.

If I use docker-compose run name_of_service /bin/bash to get a shell in a container of that service, the command below works as expected from there.

pip install --trusted-host pypi --extra-index-url http://pypi:8000 -r requirements.txt

But running virtually the same command from the Dockerfile results in a Retrying error:

RUN pip install --trusted-host pypi --extra-index-url http://pypi:8000 -r requirements.txt --user
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPConnection object at 0x7f54bac2dad0>: Failed to establish a new connection: [Errno -2] Name or service not known')': /custom-utils/

Both services are defined in the same docker-compose.yml:

docker-compose.yml (abridged):

services:
  service:
    image: service:20.10.1
    build:
      context: platform
      dockerfile: service/Dockerfile
    depends_on:
      - api
      - pypi
    environment:
      PORT: "8088"
    ports:
      - "8088:8088"
    volumes:
      - some_location_of_source
    restart: always

  pypi:
    image: pypi:20.10.1
    build:
      context: services/pypi
      dockerfile: Dockerfile
    environment:
      PORT: "8000"
    expose:
      - "8000"
    ports:
      - "8000:8000"
    volumes:
      - some_location_of_source
ArcLight_Slavik

1 Answer


Dockerfile RUN instructions can't make network calls to other Compose services, even ones defined in the same docker-compose.yml file. You need to arrange for the package server to be running "somewhere else" (it can still be in Docker, but launched separately, before the build).

At a technical level there are two issues. Compose broadly assumes all image builds happen before any containers are launched, so there's no way to require the pypi service to be up before the service image is built (depends_on: doesn't affect the build stage). Image builds also aren't attached to the Docker network that Compose creates, so they can't resolve container hostnames like pypi; that's what produces the specific "Name or service not known" error you're getting.

It might work to split this into two separate Compose YAML files, one for the package server and one for the main service. You can launch the package server, then docker-compose build the main service, then stop the package server. Since you have published ports:, you can reach the package server through one of the host's IP addresses; on a macOS or Windows host, through the special host name host.docker.internal; or otherwise via one of the techniques described in From inside of a Docker container, how do I connect to the localhost of the machine?.

RUN pip install \
  --trusted-host host.docker.internal \
  --extra-index-url http://host.docker.internal:8000 \
  -r requirements.txt
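
As a sketch of that build workflow (the file name docker-compose.pypi.yml is just a placeholder here, assuming you move the pypi service definition into its own Compose file):

# Start only the package server from its own Compose file
docker-compose -f docker-compose.pypi.yml up -d pypi

# Build the main service; its Dockerfile reaches the index via host.docker.internal
docker-compose build service

# Once the image is built, the build-time package server can be stopped again
docker-compose -f docker-compose.pypi.yml down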

(Depending on what exactly is in this package server, you may not need it at all. If you python setup.py bdist_wheel or pip wheel the dependencies you keep there, you can COPY the resulting .whl files into your image and install them directly. If it all comes from the same source tree, then a multi-stage build where an earlier stage just builds the libraries could work too.)
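
A minimal sketch of that wheel-based approach, assuming the private library source lives in a hypothetical vendor/custom-utils directory inside the build context (the directory name and Python base image are illustrative only):

# Stage 1: build the private dependency into a wheel
FROM python:3.8 AS wheels
WORKDIR /src
# vendor/custom-utils is a placeholder for wherever the library source actually lives
COPY vendor/custom-utils ./custom-utils
RUN pip wheel --no-deps --wheel-dir /wheels ./custom-utils

# Stage 2: install everything, letting pip find the private package in the copied wheels
FROM python:3.8
WORKDIR /app
COPY --from=wheels /wheels /wheels
COPY requirements.txt ./
RUN pip install --find-links /wheels -r requirements.txt
COPY . .

With this layout no package server needs to be reachable at build time at all; pip resolves the private package from the local wheel directory and everything else from the normal index.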

David Maze
  • Thanks for such a detailed answer! My plan is to have this service deployed to a website and have that be called, but I haven't gotten to it yet, so I was doing it locally. About COPYing wheels: there's a command to upload a wheel to a remote repository, which I find easier than moving the wheels to the server or handling that inside the package. – ArcLight_Slavik Oct 27 '20 at 07:56