
UPDATE – Old question title:
Docker - How to execute unzipped/unpacked/extracted binary files during docker build (add files to docker build context)

--

I've been trying (half a day :P) to execute a binary extracted during docker build.

My dockerfile contains roughly:

...
COPY setup /tmp/setup
RUN \
unzip -q /tmp/setup/x/y.zip -d /tmp/setup/a/b
...

Within directory b there is a binary file, imcl.

The error I was getting was:

/bin/sh: 1: /tmp/setup/a/b/imcl: not found

What was confusing was that listing directory b (inside the Dockerfile, during build) right before trying to execute the binary showed the correct file in place:

RUN ls -la /tmp/setup/a/b/imcl  
-rwxr-xr-x  1 root root 63050 Aug  9  2012 imcl

RUN file /tmp/setup/a/b/imcl  
ELF 32-bit LSB  executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped

Being a Unix noob, at first I thought it was a permission issue (root of the host being different from root of the container, or something), but after checking, the UID was 0 for both, so it got even weirder.

Docker recommends against using sudo, so I tried su combinations:

su - -c "/tmp/setup/a/b/imcl"
su - root -c "/tmp/setup/a/b/imcl"

Both of these returned:

stdin: is not a tty
-su: /tmp/setup/a/b: No such file or directory

Well heck, I even went and defied Docker recommendations and changed my base image from debian:jessie to the bloatish ubuntu:14.04 so I could try with sudo :D

Guess how that turned out?

sudo: unable to execute /tmp/setup/a/b/imcl: No such file or directory

Randomly googling I happened upon a piece of Docker docs which I believe is the reason to all this head bashing: "Note: docker build will return a no such file or directory error if the file or directory does not exist in the uploaded context. This may happen if there is no context, or if you specify a file that is elsewhere on the Host system. The context is limited to the current directory (and its children) for security reasons, and to ensure repeatable builds on remote Docker hosts. This is also the reason why ADD ../file will not work."

So my question is:

  • Is there a workaround to this?
  • Is there a way to add extracted files to docker build context during a build (within the dockerfile)?

Oh, and the machine I'm building this on is not connected to the internet...

I guess what I'm asking is similar to this (though I see no answer):
How to include files outside of Docker's build context?

So am I out of luck?

Do I need to unzip with a shell script before sending the build context to the Docker daemon, so that all files are present exactly as they were when the build command was issued?

UPDATE:
Meh, the build context actually wasn't the problem. I tested this and was able to execute unpacked binary files during docker build.
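
For reference, a minimal version of that test (hello.zip and the hello binary inside it are placeholder names; unzip is assumed to be available in the base image, as in my Dockerfile above, and the binary must match the image architecture) builds and runs fine:

FROM debian:jessie
COPY hello.zip /tmp/hello.zip
RUN unzip -q /tmp/hello.zip -d /tmp/hello \
 && /tmp/hello/hello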

My problem is actually this one: CentOS 64 bit bad ELF interpreter

Using debian:jessie and ubuntu:14.04 as base images gave only the No such file or directory error, but trying with centos:7 and fedora:23 gave a better error message:

/bin/sh: /tmp/setup/a/b/imcl: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

So that led me to the conclusion that this is actually the problem of running a 32-bit binary on a 64-bit system that has no 32-bit runtime libraries installed.
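
For anyone hitting the same wall, a quick way to confirm this is to ask the binary which program interpreter it expects and check whether that file exists in the image (a sketch, assuming readelf from binutils is present):

readelf -l /tmp/setup/a/b/imcl | grep interpreter
# expected: [Requesting program interpreter: /lib/ld-linux.so.2]
ls /lib/ld-linux.so.2 || echo "32-bit loader is missing"

On a pure 64-bit image the second check fails, which is exactly what the centos:7 error message spells out.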

Now the solution would be simple if I had internet access and repos enabled:

apt-get install ia32-libs

Or

yum install glibc.i686

However, I don't... :[

So the question now becomes:

  • What would be the best way to achieve the same result without repos or an internet connection?

According to IBM, the precise libraries I need are gtk2.i686 and libXtst.i686, and possibly libstdc++:

[root@localhost]# yum install gtk2.i686
[root@localhost]# yum install libXtst.i686
[root@localhost]# yum install compat-libstdc++
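
Since the machine is offline, one workaround is to fetch the RPMs (with their dependencies) on a connected machine running the same distro release and install them locally; a sketch, assuming yumdownloader from the yum-utils package and that these exact package names exist for your release:

# on a connected machine with the same distro/release
yumdownloader --resolve gtk2.i686 libXtst.i686 compat-libstdc++
# copy the downloaded *.rpm files over, then on the offline machine:
rpm -ivh *.rpm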
  • [On Debian/Ubuntu 18.04](https://stackoverflow.com/a/59496421/4970442): `sudo dpkg --add-architecture i386 && sudo apt update && sudo apt install libc6:i386` – Pablo Bianchi Dec 27 '19 at 05:43

2 Answers


UPDATE:

So the question now becomes:

  • What would be the best way to achieve the same result without repos or an internet connection?

You could use various non-official 32-bit images available on Docker Hub; search for debian32, ubuntu32, fedora32, etc.
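
For example (the image name below is illustrative only; verify what is actually published and whether you trust it before relying on it):

docker pull 32bit/debian:jessie
docker run --rm 32bit/debian:jessie dpkg --print-architecture   # should print i386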

If you can't trust them, you can build such an image yourself; instructions for that can be found on Docker Hub as well.


Alternatively, you can prepare your own image based on some official image and add 32-bit packages to it.

Say, you can use a Dockerfile like this:

FROM debian:wheezy
# suppress interactive prompts from apt during the build
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
# 32-bit compatibility libraries (available on wheezy)
RUN apt-get install -y ia32-libs

...and use the produced image as a base (with the FROM directive) for the images you're building without internet access.

You can even create an automated build on Docker Hub that will rebuild your image automatically whenever your Dockerfile (posted, say, on GitHub) or the upstream image (debian in the example above) changes.


No matter how you obtained an image with 32-bit support (an existing non-official image or one you built yourself), you can store it in a tar archive using the docker save command and then import it on the offline machine using the docker load command.
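
A rough sketch of that round-trip (the tag my-32bit-base is arbitrary):

# on a machine with internet access
docker build -t my-32bit-base .
docker save -o my-32bit-base.tar my-32bit-base
# transfer the tar to the offline machine, then:
docker load -i my-32bit-base.tar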

  • Building an own image with added 32-bit libs on 64-bit base image on an online machine and docker save + docker load for transferring the image to the offline machine, now why didn't I think of that myself... seeing as this was the exact method I used in transferring the plain base images :D – straville Jan 16 '16 at 02:07
  • Managed to build the image and install the 32bit software inside the container with a custom base image with 32bit libs added. However I ran into problems when trying to run the container after build. It seems that the 32bit libs caused problems when trying to execute 64bit scripts, such as: Cannot start container XXX: [8] System error: exec: "/IBM/WebSphere8/AppServer/profiles/std/bin/startServer.sh …": stat no such file or directory. ...that being said I **strongly** recommend getting matching 32/64bit installation executables/binaries compared to the system. – straville Jan 25 '16 at 11:42

You're in luck! You can do this using the ADD command. The docs say:

If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory... When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of:

  1. Whatever existed at the destination path and
  2. The contents of the source tree, with conflicts resolved in favor of “2.” on a file-by-file basis.
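
Note that zip is not on that list, so a .zip file is copied as-is rather than unpacked; this only helps if the archive can be repacked as a tar. A minimal sketch (setup.tar.gz is a placeholder name):

FROM debian:jessie
# a recognized tar archive is auto-extracted into the destination directory
ADD setup.tar.gz /tmp/setup/
RUN ls -la /tmp/setup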
  • Happened upon this myself as well... too bad my format is .zip :P Guess I'll need to go with script unpacking and relaying the unzipped files in the initial build context. – straville Jan 13 '16 at 22:24
  • Sorry, this wasn't the true answer after all; I've updated the question. Also, binary execution of unzipped files seems very possible during Docker build. – straville Jan 15 '16 at 15:38
  • Good to know. I'll keep this answer up because it contains useful context. – code_monk Jan 15 '16 at 21:09