I'm trying to build an aarch64 image using GitLab's Docker-in-Docker configuration for a Runner with the docker executor. Ideally the build should be done using docker-compose and specifying the build platform in the Dockerfiles.
The `config.toml`'s runners section looks like this:
```toml
[[runners]]
  name = "myrunner"
  url = "xxx"
  token = "xxx"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/certs/client", "/cache"]
    shm_size = 0
```
The `.gitlab-ci.yml` looks like this:
```yaml
stages:
  - build

variables:
  DOCKER_BUILDKIT: "1"
  COMPOSE_DOCKER_CLI_BUILD: "1"
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:20.10.16-dind

docker_build:
  stage: build
  image: docker:20.10.16
  before_script:
    - docker info
    - docker-compose --version
    - docker buildx version
  script:
    - ...
    - docker-compose build
```
The `before_script` produces this output:
```
$ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc., v0.8.2)
  compose: Docker Compose (Docker Inc., v2.5.1)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.16
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
 runc version: v1.1.1-0-g52de29d7
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.0-73-generic
 Operating System: Alpine Linux v3.15 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.775GiB
 Name: b90e885189b8
 WARNING: No swap limit support
 ID: ULYB:ANXP:MZRN:BTTF:BIER:OCV3:SVF5:HUAB:BW2V:SDSJ:ZOMU:ZZWZ
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

$ docker-compose --version
Docker Compose version v2.5.1

$ docker buildx version
github.com/docker/buildx v0.8.2 6224def4dd2c3d347eee19db595348c50d7cb491
```
For good measure, I also enabled BuildKit in the host's `daemon.json`:

```json
{ "features": { "buildkit": true } }
```
However, if I try to build from a Dockerfile that specifies aarch64 as the build platform, the build fails, for example at the first `npm install`. (amd64 images built from the same Dockerfile on the same Runner work without issues.)
Example Dockerfile:
```dockerfile
ARG BUILD_PLATFORM
FROM --platform=$BUILD_PLATFORM node:lts-alpine
RUN npm install -g http-server
WORKDIR /svelte-frontend
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
CMD http-server public --port 3000 --proxy http://localhost:3000?
```
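For context, the compose file itself isn't shown above; a minimal `docker-compose.yml` sketch that forwards the platform into the Dockerfile's `ARG BUILD_PLATFORM` might look like this (the service name, context path, image tag, and `linux/arm64` value are assumptions, not the actual file):

```yaml
services:
  svelte-frontend:
    build:
      context: .
      args:
        # forwarded into the Dockerfile's "ARG BUILD_PLATFORM"
        BUILD_PLATFORM: linux/arm64
    image: svelte-frontend:latest
```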
I have also tried using Docker socket binding with the same result.
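For reference, the socket-binding variant drops the `dind` service and mounts the host daemon's socket into the build container instead; a sketch of the relevant `[runners.docker]` change (the rest of the runner config stays as above):

```toml
[runners.docker]
  # bind the host's Docker socket instead of using the docker:dind service
  volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
```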
UPDATE:
I have tried using `buildx bake` with the following commands:
```yaml
...
script:
  # Create a new docker-compose file with substituted variables
  # for use with buildx bake
  - docker-compose config > docker-compose.buildx.yml
  # Build images using buildx bake
  - docker buildx bake -f docker-compose.buildx.yml
```
But the build still fails, even though buildx is now clearly being used.
UPDATE 2:
Re-initializing the binfmt handlers as described in this answer did the trick for me. This means adding the following to the `before_script`:
```yaml
...
before_script:
  - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  - ...
```
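To confirm the emulation handlers are actually registered before kicking off the real build, a quick sanity check can follow the reset step (the `uname -m` line is an optional addition of mine, not part of the original fix; it should print `aarch64` once arm64 emulation works):

```yaml
before_script:
  - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  # optional sanity check: should print "aarch64" if emulation is registered
  - docker run --rm --platform linux/arm64 alpine uname -m
```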