
I have a docker-compose networking issue. I created a shared setup with containers for Ubuntu, TensorFlow, and RStudio, which do an excellent job of sharing a volume between themselves and the host. But when it comes to using the resources of one container inside the terminal of another, I hit a wall. I can't do as little as calling python in the terminal of a container that doesn't have it. My docker-compose.yml:

# docker-compose.yml

version: '3'
services:

  # ubuntu (16.04)
  ubuntu:
    image: ubuntu_base
    build:
      context: .
      dockerfile: dockerfileBase
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    ports:
      - "8081:8081"
    tty: true

  # tensorflow
  tensorflow:
    image: tensorflow_jupyter
    build:
      context: .
      dockerfile: dockerfileTensorflow
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
      - .:/notebooks
    networks:
      - default
    ports:
      - "8888:8888"
    tty: true

  # rstudio
  rstudio:
    image: rstudio1
    build:
      context: .
      dockerfile: dockerfileRstudio1
    volumes:
      - "/data/data_vol/:/data/data_vol/:Z"
    networks:
      - default
    environment:
      - PASSWORD=test
    ports:
      - "8787:8787"
    tty: true

volumes:
  ubuntu:
  tensorflow:
  rstudio:

networks:
  default:
    driver: bridge

I am quite a Docker novice, so I'm not sure about my network settings. That said, docker inspect composetest_default (the default network Compose created for the project) shows the containers are connected to the network. It is my understanding that in this situation I should be able to freely call one service from inside each of the other containers and vice versa:

"Containers": {
            "83065ec7c84de22a1f91242b42d41b293e622528d4ef6819132325fde1d37164": {
                "Name": "composetest_ubuntu_1",
                "EndpointID": "0dbf6b889eb9f818cfafbe6523f020c862b2040b0162ffbcaebfbdc9395d1aa2",
                "MacAddress": "02:42:c0:a8:40:04",
                "IPv4Address": "192.168.64.4/20",
                "IPv6Address": ""
            },
            "8a2e44a6d39abd246097cb9e5792a45ca25feee16c7c2e6a64fb1cee436631ff": {
                "Name": "composetest_rstudio_1",
                "EndpointID": "d7104ac8aaa089d4b679cc2a699ed7ab3592f4f549041fd35e5d2efe0a5d256a",
                "MacAddress": "02:42:c0:a8:40:03",
                "IPv4Address": "192.168.64.3/20",
                "IPv6Address": ""
            },
            "ea51749aedb1ec28f5ba56139c5e948af90213d914630780a3a2d2ed8ec9c732": {
                "Name": "composetest_tensorflow_1",
                "EndpointID": "248e7b2f163cff2c1388c1c69196bea93369434d91cdedd67933c970ff160022",
                "MacAddress": "02:42:c0:a8:40:02",
                "IPv4Address": "192.168.64.2/20",
                "IPv6Address": ""
            }
        }

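For instance, that reachability can be verified by service name from inside any of the containers (a sketch using the container names from the output above; it assumes ping is present in the image):

$ docker exec -it composetest_ubuntu_1 bash   # on the host: shell into the ubuntu container
$ ping -c 1 tensorflow                        # inside: service names resolve via Docker's embedded DNS
$ ping -c 1 rstudio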
Some pre-history: I had tried links: inside the docker-compose file, but changed to networks: on account of deprecation warnings. Was this the right way to go about it?

Docker version 18.09.1

Docker-compose version 1.17.1

Momchill
  • In the same compose project you can call the other containers by service name: ping ubuntu, ping tensorflow, ping rstudio – bdn02 Feb 13 '19 at 10:37
  • I'm sorry, but can you elaborate on this? What is the use of simply pinging the service? I wish to be able to actively use the functionality of each container inside each of the others. Say, use tensorflow from within R? To the best of my knowledge that cannot be served with a ping request, can it? – Momchill Feb 18 '19 at 10:07

1 Answer


but when it comes to using the resources of one container inside the terminal of another, I hit a wall. I can't do as little as calling python in the terminal of a container that doesn't have it.

You cannot use Linux programs that are in the bin path of one container from another container, but you can use any service that is designed to communicate over a network from any container in your docker-compose file.
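For instance, the Jupyter server the tensorflow service publishes on port 8888 is such a network service, so the other containers can reach it by service name (a sketch; it assumes curl is installed in the image, and the exact URL path and token depend on the Jupyter configuration):

$ docker exec -it composetest_rstudio_1 bash  # shell into the rstudio container
$ curl http://tensorflow:8888                 # reach Jupyter over the shared bridge network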

Bin path:

$ echo $PATH
/home/exadra37/bin:/home/exadra37/bin:/home/exadra37/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin

So programs in these paths that are not designed to communicate over a network are not usable from other containers; they need to be installed in each container where you need them, like python.
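So, for example, making python callable inside the rstudio container means installing it in that image, along these lines in dockerfileRstudio1 (a sketch; the FROM line stands in for whatever base image the real dockerfileRstudio1 uses):

# dockerfileRstudio1 (sketch)
FROM rocker/rstudio   # hypothetical base image, for illustration only
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 \
 && rm -rf /var/lib/apt/lists/*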

Exadra37
  • Sorry if this is a noob question, but in that case what do I do to make them usable? In other words, what has to happen for the containers to be interoperable and allow for seamless integration? – Momchill Feb 13 '19 at 10:50
  • You can build a base docker image with all the stuff you want available in all containers and then build all the other docker images based on that one. Making programs in the bin path of one container available in another container is not possible, unless you connect over ssh to the other container, but that means you are switching to the shell of the other container; plus it is bad practice to ssh into containers, they shouldn't support ssh. – Exadra37 Feb 13 '19 at 10:58
  • Ok, so what's the point of docker-compose then? I thought the idea was not to clutter one container with all the stuff, but rather to compartmentalize the workings in separate containers that you bring together with a docker-compose? – Momchill Feb 13 '19 at 11:02
  • You bring together the stuff that can communicate over the network, and this has nothing to do with docker or docker-compose... it is how Linux works. Each container has its own operating system, which can be Ubuntu in one container and CentOS in another; they are separate OSes, and docker and docker-compose can only bridge them for stuff that is able to communicate via the network. – Exadra37 Feb 13 '19 at 11:23
  • So, if I'm interpreting your suggestion correctly, I am misusing the nature of docker-compose in general. On your understanding, docker-compose has nothing to do with compartmentalizing standalone docker containers and making them work together in the style of one for ubuntu, another for tensorflow and a third for Rstudio; rather, it is a device to let servers and clients talk to each other. In that case, you would suggest having all of my images in one dockerfile and running in the same container at the same time? Correct? – Momchill Feb 13 '19 at 17:49
  • You are separating each application into separate containers, and that is a best practice, but what you are misunderstanding is that you think it is possible to access programs in other containers that are not a service communicating over a network. Docker-compose connects docker containers over a network, whereas you think it aggregates/composes them all into one big unit where you get a shell in one and have access to the programs installed in all of them. – Exadra37 Feb 13 '19 at 19:12
  • If you want to compose containers you need to do it with Dockerfiles extending other Dockerfiles, but all of them will be built on the operating system of the first Dockerfile in the chain (see the sketch after these comments). – Exadra37 Feb 13 '19 at 19:12
  • I think I am arriving at clarity! So when you say that containers "that are not a service" don't communicate over a network, does that mean that if containers ARE services, then they can accomplish this? Concerning dockerfiles, I am connecting the containers through dockerfiles built in the docker-compose.yml exactly this way: ubuntu (base) -> tensorflow (operational) -> R (postprocessing & visualization). I guess that's clearly not what I've done, but what exactly do you mean by "extending dockerfiles"? – Momchill Feb 13 '19 at 22:13
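By "Dockerfiles extending other Dockerfiles" the answer presumably means a FROM chain, where each image builds on the previous one and the last image in the chain accumulates all the tooling (a sketch reusing the question's image names; the install commands are illustrative only):

# dockerfileBase -> builds image ubuntu_base
FROM ubuntu:16.04

# dockerfileTensorflow -> builds image tensorflow_jupyter, on top of ubuntu_base
FROM ubuntu_base
RUN apt-get update && apt-get install -y python3-pip \
 && pip3 install tensorflow jupyter   # illustrative only

# dockerfileRstudio1 -> builds image rstudio1, on top of tensorflow_jupyter
FROM tensorflow_jupyter
# ...install R and RStudio Server here...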