
I am running a docker build and it is taking a really long time. I'd like to know what it is doing, but the stdout seems to have stopped (probably because some limit was reached?). It feels like this build is running inside of a container and I'd like to know what it is and perhaps exec a command in it or attach to its stdout to see what is going on. Is this possible? Are there other ways to troubleshoot long running docker builds?

Last output:

=> [ 6/10] RUN curl -fsSL https://boostorg.jfrog.io/artifactory/main/release/1.76.0/source/boost_1_76_0.tar.gz | tar -xzf -     && cd boost_1_76_0    5182.6s
 => => #   1850K .......... .......... .......... .......... .......... 93% 3.06M 0s
 => => #   1900K .......... .......... .......... .......... .......... 95% 2.36M 0s
 => => #   1950K .......... .......... .......... .......... .......... 97% 4.45M 0s
 => => #   2000K .......... .......... .......... .......... .         100% 12.7M=0.9s
 => => # 2021-06-24 02:34:07 (2.13 MB/s) - 'pcre-8.44.tar.gz'  saved [2090750/2090750]
 => => # /bin/sh: ./config.rpath: No such file or directory
MK.
  • can you paste the last lines of build log? – Lei Yang Jun 24 '21 at 03:38
  • i added the output, though not sure it matters. I feel like at some point there was also something about the max log size of 1MiB being reached and logging stopping, so I think that's why I'm not seeing it change. HyperV is eating up lots of CPU so I think build is running. – MK. Jun 24 '21 at 03:46
  • the build is dependent on some online resource, but the resource cannot be accessed from the build machine. It is quite a common problem. – Lei Yang Jun 24 '21 at 03:48
  • If you run the docker command and don't specify the `-d` (detach) option, the container will be attached to your shell and you'll see all the output as it occurs. – Hans Kilian Jun 24 '21 at 07:45
  • It's common to combine multiple commands into a single `RUN` instruction for a couple of reasons, but this can hinder debugging; try splitting up this line into multiple parts. The build does in fact run inside containers, and if it's really taking 45 minutes, it's not unreasonable to try to find the container and exec a shell in it. – David Maze Jun 24 '21 at 10:19
  • @LeiYang no, it was building. – MK. Jun 24 '21 at 13:54
  • @DavidMaze but how do i find that container? – MK. Jun 24 '21 at 13:55

2 Answers


You can attach to a running container using the command

docker attach <container name>

This'll let you see the stdout and stderr output as it occurs. You detach from it again by pressing ctrl-p followed by ctrl-q.

If you want to start a shell running in the container to issue commands, you can do it using the command

docker exec -it <container name> /bin/bash

If Bash isn't installed or you want to use another shell, you can of course specify that instead. Note that you have to use the container name and not the image name. You can see the container name using the command docker ps.
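Note that this only works if a container for the build step actually shows up in `docker ps`. Judging by the `=> [ 6/10]` lines in the question, this build is running under BuildKit, whose intermediate build containers are managed by the BuildKit daemon and do not appear in `docker ps`. One possible workaround, sketched below with a hypothetical image tag, is to force the classic builder, whose per-step containers are ordinary containers:

```shell
# Force the legacy builder so each RUN step runs in a normal container
# (myimage is a hypothetical tag):
DOCKER_BUILDKIT=0 docker build -t myimage .

# In a second terminal, the step that is currently running shows up here:
docker ps

# ...and can then be inspected (use /bin/sh if bash isn't installed):
docker exec -it <container-id> /bin/sh
```

The classic builder is slower and lacks BuildKit's caching improvements, so this is best treated as a debugging aid rather than a permanent setting.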

Hans Kilian
    There is nothing in `docker ps`. Is there an actual container running for the build? – MK. Jun 24 '21 at 13:56

Not an answer, but rather a couple of suggestions that might help.

It looks like you "fetch" the resources from a remote machine, so it probably has something to do with network latency.

To eliminate all other reasons, you can:

  • remove all the relevant images from the build machine (docker rmi)
  • run docker pull <whatever you want to pull>. This will give you a feeling for the transfer speed.
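Concretely, these two steps could look like the following (the image name is a placeholder for whatever base image the build actually uses):

```shell
# Remove the locally cached copy so the pull is a real network transfer:
docker rmi myorg/myimage:latest

# Time a fresh pull to gauge the transfer speed from the registry:
time docker pull myorg/myimage:latest
```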

Other than that, since Docker is a client-server system, you'll probably want to examine the logs of Docker itself. The actual path to the Docker daemon log varies depending on the OS, but I've found this SO thread that summarizes the options pretty well.

Yet another concern: you say docker build takes a long time. Is it only the first build (where it actually pulls everything from the remote repo), or also the subsequent builds? Since the upper layers of the image are already pre-cached, my assumption is that subsequent builds should run considerably faster. Of course, it really depends on your actual setup: for example, if you're running Jenkins slaves in Docker so that they start up "clean", the actual pull of all the layers happens every time. In this case, introducing a tool like Nexus that can act as an intermediate Docker registry can boost the performance.
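Related to caching (and to David Maze's comment above): splitting the long `RUN` instruction from the question into smaller steps makes each one a separately cached layer, so a failure or slowdown is pinned to a specific step and a rebuild resumes after the last successful one. A rough sketch (the build commands after the download are not shown in the question, so the second step is illustrative, using Boost's usual bootstrap/b2 invocation):

```dockerfile
# Download and unpack in one cached layer:
RUN curl -fsSL https://boostorg.jfrog.io/artifactory/main/release/1.76.0/source/boost_1_76_0.tar.gz \
    | tar -xzf -

# Build in a separate layer, so a build failure doesn't repeat the download:
RUN cd boost_1_76_0 && ./bootstrap.sh && ./b2
```

The trade-off is more layers in the final image, which is why the steps are often recombined once debugging is done.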

Mark Bramnik