
I'm looking for a way for a user to execute a limited set of commands on the host while accessing it only from containers or the browser. The goal is to avoid SSHing to the host just to run occasional commands like make start, make stop, etc. These make targets just run a series of docker-compose commands and are sometimes needed in dev.

The two possible ways I can think of are:

  • Via the Cloud9 terminal inside the browser (we'll already be using it). By default this terminal only accesses the container itself, of course.
  • Via a custom mini webapp (e.g. Node.js/Express) with buttons that map to commands. This would be easy to do if it ran on the host itself, but I want to keep all code like this in containers.
rgareth
  • Thank you for the clarification that accessing host processes is against Docker methodology. I guess then the answer is that I need a non-Docker process (e.g. a webserver) that runs directly on the host instead of inside a container. – rgareth Jul 30 '15 at 11:50
  • Does this answer your question? [How to run shell script on host from docker container?](https://stackoverflow.com/questions/32163955/how-to-run-shell-script-on-host-from-docker-container) – Maicon Mauricio Feb 24 '23 at 13:52

4 Answers


Although it might not be best practice, it is still possible to control the host from inside a container. If you are running docker-compose commands, you can bind mount the Docker socket with -v /var/run/docker.sock:/var/run/docker.sock. If you want to use other system tools, you will have to bind mount all the required volumes with -v; this gets really tricky and tedious when you want to use system binaries that depend on /lib/*.so files.

If you need to run sudo commands, don't forget to add the --privileged flag when running the container.
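A minimal sketch of the socket-mount approach (the image and inner command here are illustrative; it assumes Docker is installed on the host):

```shell
# Run a container whose docker CLI talks to the HOST's daemon,
# by bind mounting the host's Docker socket into the container.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli \
  docker ps    # lists containers running on the host's daemon
```

Note that anything with access to that socket effectively has root on the host, so restrict who can reach such a container.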

AnandKumar Patel

Named pipes can be very useful for running commands on the host machine from Docker. Your question is very similar to this one.

The solution using named pipes was also given in that question. I have tried and tested this approach and it works perfectly fine.
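A minimal sketch of the named-pipe idea, assuming a pipe path of /tmp/host_pipe (all paths and image names here are illustrative):

```shell
# HOST: create a FIFO and loop forever, executing each line written to it.
mkfifo /tmp/host_pipe
while true; do
  eval "$(cat /tmp/host_pipe)"   # run the next command written to the pipe
done &

# CONTAINER: bind mount the pipe and write commands into it, e.g.
#   docker run -v /tmp/host_pipe:/tmp/host_pipe myimage \
#     sh -c 'echo "make start" > /tmp/host_pipe'
```

Since the host loop evals whatever arrives on the pipe, anyone who can write to it can run arbitrary commands on the host; in practice you would whitelist the allowed commands in the loop rather than eval raw input.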

helvete

That approach would go against Docker's concept of process and resource encapsulation. With Docker you isolate processes completely from the host and from each other (unless you link containers or share volumes). From within a container you cannot see any processes running on the host, due to process namespaces. Executing processes on the host from within a container therefore runs counter to the Docker methodology.

Henrik Sachse

A container is not supposed to break out and access the host. Docker is (among other things) process isolation. You may find various tricks to execute some code on the host when you set it up, though.

user2915097
  • I disagree with this actually, some managers want to do this because it's a self contained way of packaging software and / or includes all dependencies. – Owl Jun 19 '23 at 20:41