
Let's assume I have the following docker-compose.yml file running two different Python apps in parallel (e.g. via Flask):

app1:
  command: python app.py
  build: app1/

app2:
  command: python app.py
  build: app2/
  links:
    - app1

app2 is linked to app1 since I want to fetch particular data from app1 within it. My problem is a scenario where I want to debug this link. I can easily debug app1 and app2 as standalone containers (via `docker-compose run --service-ports ... python app.py` and placing a `pdb` breakpoint somewhere in the code). What I can't do is debug app1 when the request comes from app2: if I start app1 with `docker-compose run`, then app2 is not able to resolve the link. This becomes even more of a problem with more apps/services "talking" to each other via their links.
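For reference, the kind of breakpoint I mean is a plain `pdb` call dropped into a handler. A minimal sketch (the route, payload and port are illustrative, not part of my actual apps):

```python
# app1/app.py -- minimal sketch; route, payload and port are illustrative
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/data')
def data():
    # Uncomment to drop into the debugger when a request arrives;
    # this needs an interactive terminal (docker-compose run, not up):
    # import pdb; pdb.set_trace()
    return jsonify({'value': 42})

if __name__ == '__main__':
    # Bind to 0.0.0.0 so the port is reachable from outside the container
    app.run(host='0.0.0.0', port=5000)
```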

Is there a good way to handle this? How do you approach the debugging problem with linked containers in general (not necessarily Python specific)? Thanks for the input.

Torsten Engelbrecht
  • Doesn't it work if you start the `app2` service with service ports on? Like `docker-compose run --service-ports app2`. If not, you could start the `app1` service with service ports, grab its container name, then start the second service, `app2`, manually linking it to the running `app1` (using `--link`). Or maybe you don't even need to provide linking: if `app1` is already running (with service ports) and it's the **only** instance of this service running, starting `app2` should connect to it automatically. – linkyndy May 25 '15 at 11:45
  • If I start both services with the `--service-ports` flag, `app1` can be started but `app2` is not able to start up because it tries to start the linked container for `app1` separately. If I use `--no-deps` then the link is not established. Your second solution would work, but it's hard to automate. Also, I would like to stay within `docker-compose` boundaries and not use plain `docker` if not necessary. – Torsten Engelbrecht May 26 '15 at 08:31
  • I found one workaround which currently works for me. When starting each container for debugging, I run them with the `--service-ports` and `--no-deps` flags. In order for them to communicate, I also need to pass in the host machine's IP as an environment variable and expose the ports of each container. The host IP is automatically accessible from each Docker container, so I can make requests to it instead of to the IP from the link. I also need to start all the intermediate containers like this. Still, it's only a workaround; I'd rather have a built-in solution. – Torsten Engelbrecht May 28 '15 at 08:58
  • This answer to a similar question solved this problem for me: http://stackoverflow.com/a/40449862/4311512 – Rupert Angermeier Jan 02 '17 at 13:24
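For reference, the workaround from the comments can be sketched roughly like this (the `HOST_IP` variable name and the port are illustrative assumptions, not taken from the thread):

```python
# Sketch of the workaround: each service is started with
# --service-ports --no-deps, the host machine's IP is passed in as an
# environment variable, and requests go to the host's published ports
# instead of the link hostname, e.g.:
#   docker-compose run -e HOST_IP=192.168.0.10 --service-ports --no-deps app2
import os

HOST_IP = os.environ.get('HOST_IP', '127.0.0.1')
APP1_PORT = 5000  # the host port that app1 publishes (illustrative)

def app1_url(path='/data'):
    # Build the URL against the host's published port rather than
    # the link alias that docker-compose would normally inject.
    return 'http://{}:{}{}'.format(HOST_IP, APP1_PORT, path)
```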

1 Answer


If you're doing development locally on the same machine, you can add `net: 'host'` to each service's configuration (in newer Compose file versions the key is `network_mode`), which does the following:

Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to not containerize the container's networking!

For more info, see the documentation.

app1:
  command: python app.py
  build: app1/
  net: 'host'

app2:
  command: python app.py
  build: app2/
  net: 'host'

Additionally, you should start app1 in detached mode and app2 in the foreground for debugging purposes:

docker-compose up -d app1
docker-compose run app2

As soon as a request hits your breakpoint in app2, you will drop down to the `(pdb)` prompt in app2's foreground terminal.
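With host networking, app2 can reach app1 on localhost instead of a link hostname, so a breakpoint in app2's request path is easy to hit. A sketch, assuming app1 listens on host port 5000 (the URL, port and route are assumptions, not from the question):

```python
# app2/app.py -- sketch assuming app1 listens on host port 5000
import os
from urllib.request import urlopen

from flask import Flask

app = Flask(__name__)

# With net: 'host' both containers share the host's network stack,
# so app1 is reachable on localhost rather than via a link alias.
APP1_URL = os.environ.get('APP1_URL', 'http://localhost:5000/data')

@app.route('/combined')
def combined():
    with urlopen(APP1_URL) as resp:
        body = resp.read().decode()
    # import pdb; pdb.set_trace()  # inspect `body` interactively here
    return body

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)
```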

quickinsights