
I am trying to create a shell script for setting up a docker container. My script file looks like:

#!/bin/bash

docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash

Running this script file will run the container in a newly invoked bash.

Now I need to run a script file (test.sh), which is already inside the container, from the above shell script (e.g. cd /path/to/test.sh && ./test.sh). How can I do that?

Jonathan Hall
zappy

  • Why not use `WORKDIR` and `CMD`? – Dharmit Jul 23 '15 at 05:33
  • You probably don't want to be using --privileged here. See: https://stackoverflow.com/questions/36425230/privileged-containers-and-capabilities – CMP May 29 '18 at 22:26

10 Answers

You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:

docker exec mycontainer /path/to/test.sh

And to run from a bash session:

docker exec -it mycontainer /bin/bash

From there you can run your script.
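Putting this together with the script in the question, a wrapper could start the container detached and then invoke the in-container script. This is only a sketch: `mycontainer`, `myImage:new`, and the paths come from the question, and `-d` is assumed so the wrapper isn't held by the container's shell.

```shell
#!/bin/bash
# Start the container in the background so this script can continue
docker run -d -t -p 5902:5902 --name "mycontainer" myImage:new /bin/bash

# Run the script that already lives inside the container;
# bash -c lets us cd to its directory first, as in the question
docker exec mycontainer /bin/bash -c "cd /path/to && ./test.sh"
```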

Javier Cortejoso

  • What if I need to enter /bin/bash first and then run a command inside that bash? – zappy Jul 23 '15 at 11:33
  • You can also run a local script from the host directly: `docker exec -i mycontainer bash < mylocal.sh`. This reads the local host script and runs it inside the container. You can do this with other things (like .tgz files piped into tar); it's just using `-i` to pipe into the container process's stdin. – Marvin Dec 08 '17 at 15:32
  • @Marvin what's the equivalent in PowerShell? The "<" character is not recognized. – Nicekiwi Feb 18 '20 at 00:10
  • I'm not a PowerShell guru (thankfully), but I wandered around SO and found https://stackoverflow.com/a/11788475/500902. So, maybe `Get-Content mylocal.sh | docker exec -i mycontainer bash`. I don't know if that works, though. – Marvin Feb 18 '20 at 12:12
  • For me it was: `docker exec -i containerID /bin/sh < someLocalScript.sh` – Charden Daxicen Jan 11 '22 at 07:23
  • Note that piping in a local script like @Marvin did has the negative side effect that `set -e` inside the executed script has **no** effect! – FireEmerald Jan 26 '22 at 15:58
  • Is there a way to do it with `docker run [OPTIONS] IMAGE [COMMAND] [ARG...]`? I expected to pass in my command `node app/my_file.js` and have it work. – eran otzap Sep 20 '22 at 05:36
  • An inline script on the host works as well, using a here-document: `docker exec -i mycontainer bash <` – Fumisky Wells Sep 28 '22 at 01:21

Assuming that your docker container is up and running, you can run commands as:

docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
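A whole for-loop can go in place of `cmd1` as well (this comes up in the comments below); the key is that the entire loop stays inside the single quoted argument to `-c`. A sketch, with an illustrative container name:

```shell
# Everything between the quotes is one argument, so the shell
# inside the container sees a complete for-loop
docker exec mycontainer /bin/sh -c 'for i in 1 2 3; do echo "item $i"; done'
```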
Raghwendra Singh

  • I like this answer; you don't have to log into the docker container to execute a command or set of commands. Thank you! – Hatem Jaber Jan 20 '16 at 22:04
  • Do you know how to take this a step further and pass the entire command (`/bin/sh -c "cmd1; cmd2; ...; cmdn"`) as the value of a shell variable? I ask because `docker run` seems to expect a single command and individual unquoted arguments rather than a quoted string. – davidA Sep 12 '16 at 05:13
  • @meowsqueak: This answer tells you how to run multiple commands inside an already created and running container without logging into that container, which is helpful in automation. However, if you want to run multiple commands at the time of container creation (PS: the `docker run` command creates and starts the container), you can achieve that by following the answers in this same thread: https://stackoverflow.com/a/41363989/777617 – Raghwendra Singh Aug 28 '17 at 10:54
  • In place of `cmd1`, I need to pass in an entire for-loop, but it expects `;` after the *loop* statement and *do* statement. How can I run the entire for-loop as a single command, i.e. `cmd1`? Is it possible? – Nicholas K Jun 02 '20 at 10:08
  • Exactly what I needed! – bwl1289 Nov 15 '21 at 22:21

I was searching for an answer to this same question and found that `ENTRYPOINT` in the Dockerfile was the solution for me.

Dockerfile

...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash

Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
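A fuller sketch of such a Dockerfile (the base image and script names are illustrative; the shell form of `ENTRYPOINT` runs the commands in order and then leaves you in bash):

```dockerfile
FROM ubuntu:bionic
COPY my-script.sh my-script2.sh /
RUN chmod +x /my-script.sh /my-script2.sh
# Shell form: run both scripts, then drop into an interactive bash
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
```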

Tassoman
tojo

In case you don't want (or have) a running container, you can call your script directly with the run command.

Remove the interactive tty arguments `-i -t` and use this:

    $ docker run ubuntu:bionic /bin/bash /path/to/script.sh

This should (I didn't test it) also work for other scripts:

    $ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
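If the script only exists on the host rather than in the image, the same approach works by bind-mounting it into the container first (the paths here are illustrative):

```shell
# Mount the host script read-only into /tmp and execute it in the container
docker run -v "$PWD/script.sh:/tmp/script.sh:ro" ubuntu:bionic /bin/bash /tmp/script.sh
```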
Thomio

This command worked for me

cat local_file.sh | docker exec -i container_name bash
h.aittamaa

You could also mount a local directory into your docker image and source the script in your `.bashrc`. Don't forget that the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)

I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell; see the update notice.)

Here is how you bind your current directory:

docker run -it -v $PWD:/scripts $my_docker_build /bin/bash

Now your current directory is bound to /scripts of your docker instance.

(Outdated) To save your .bashrc changes commit your working image with this command:

docker commit $container_id $my_docker_build

Update

To solve the issue to open up a new shell for every change I now do the following:

In the Dockerfile itself I add `RUN echo "/scripts/bashrc" > /root/.bashrc`. Inside the zshrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub-shell on every change.

BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.

Devpool
  • Too late! @Javier already shows a straightforward solution! I feel that one is still better. – zappy Mar 15 '18 at 12:55
  • @zappy the solution from Javier did not solve this problem conveniently for me, but my solution did. I thought it would be interesting for those who have a similar problem where they don't want to restart the docker image(s) to update a few functions they need. For example, if you use multiple docker images at once to spin up a dev cluster, you don't want to restart them all the time. – Devpool Mar 20 '18 at 07:29

Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line, or for use in a script), then you can use:

$ docker run ubuntu:bionic /bin/bash -c '
  echo "Hello there"
  echo "this could be a long script"
  '
Ganesh Pendyala

Have a look at entrypoints too; they let you run multiple commands: https://docs.docker.com/engine/reference/builder/#/entrypoint

Boris

This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.

docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
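Inside `mylocal.sh` those arguments then arrive as ordinary positional parameters: `bash -s` reads the script from stdin while still accepting arguments. The same behavior can be seen locally without docker (the script body here is illustrative):

```shell
# bash -s: script comes from stdin, the arguments become $1..$3
echo 'echo "got: $1 $2 $3"' | bash -s arg1 arg2 arg3
# prints: got: arg1 arg2 arg3
```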

If you want to run the same command on multiple containers, you can do this:

for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
DMin