
This is how I'm using kaniko to build Docker images in my GitLab CI, which is working great.

But I need to read a JSON file to get some values, so I need access to jq.

.gitlab-ci.yml

deploy:
  stage: deployment
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
    - |
      /kaniko/executor \
        --context $CI_PROJECT_DIR \
        --dockerfile $CI_PROJECT_DIR/app/Dockerfile \
        --destination $CI_REGISTRY_IMAGE/app:latest
    - jq # <- Is not working, as jq is not installed
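
For example, the goal is to read a value from a JSON file, roughly like this (the file and key here are just placeholders):

    - IMAGE_TAG=$(jq -r '.version' $CI_PROJECT_DIR/app/package.json)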

Is it possible to add jq to the image, so that it doesn't have to be installed again and again in this stage?

On all other stages I'm using my own Alpine image, to which I added everything I need in my CI pipeline. So another option would be to add kaniko to this image, if possible. That would result in one image containing all the utilities I need.

Dockerfile

FROM alpine:3.14.2

RUN apk --update add \
  bash \
  curl \
  git \
  jq \
  npm
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.4/bin/linux/amd64/kubectl
RUN chmod u+x kubectl && mv kubectl /bin/kubectl
# Add kaniko to this image??
user3142695

4 Answers


The official Kaniko Docker image is built from scratch using standalone Go binaries (see the Dockerfile in Kaniko's GitHub repository). You can re-use those binaries by copying them from the official image into your own image, like this:

# This FROM instruction is a shortcut so COPY --from=kaniko can be used below
# It's also possible to use COPY --from=gcr.io/kaniko-project/executor directly
FROM gcr.io/kaniko-project/executor AS kaniko

FROM alpine:3.14.2

RUN apk --update add \
  bash \
  curl \
  git \
  jq \
  npm
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.4/bin/linux/amd64/kubectl
RUN chmod u+x kubectl && mv kubectl /bin/kubectl

#
# Add kaniko to this image by re-using binaries and steps from official image
#
COPY --from=kaniko /kaniko/executor /kaniko/executor
COPY --from=kaniko /kaniko/docker-credential-gcr /kaniko/docker-credential-gcr
COPY --from=kaniko /kaniko/docker-credential-ecr-login /kaniko/docker-credential-ecr-login
COPY --from=kaniko /kaniko/docker-credential-acr /kaniko/docker-credential-acr
COPY --from=kaniko /etc/nsswitch.conf /etc/nsswitch.conf
COPY --from=kaniko /kaniko/.docker /kaniko/.docker

ENV PATH $PATH:/usr/local/bin:/kaniko
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json

EDIT: for the debug image, the Dockerfile would be:

FROM gcr.io/kaniko-project/executor:debug AS kaniko

FROM alpine:3.14.2

RUN apk --update add \
  bash \
  curl \
  git \
  jq \
  npm
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.4/bin/linux/amd64/kubectl
RUN chmod u+x kubectl && mv kubectl /bin/kubectl

#
# Add kaniko to this image by re-using binaries and steps from official image
#
COPY --from=kaniko /kaniko/ /kaniko/
COPY --from=kaniko /kaniko/warmer /kaniko/warmer
COPY --from=kaniko /kaniko/docker-credential-gcr /kaniko/docker-credential-gcr
COPY --from=kaniko /kaniko/docker-credential-ecr-login /kaniko/docker-credential-ecr-login
COPY --from=kaniko /kaniko/docker-credential-acr /kaniko/docker-credential-acr
COPY --from=kaniko /kaniko/.docker /kaniko/.docker
COPY --from=busybox:1.32.0 /bin /busybox

ENV PATH $PATH:/usr/local/bin:/kaniko:/busybox
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json

Note that you need to use gcr.io/kaniko-project/executor:debug (for the latest version) or gcr.io/kaniko-project/executor:v1.6.0-debug (or another debug tag) as the source image.


I tested building a small image, and it seems to work fine:

# Built above example with docker build . -t kaniko-alpine
# And ran container with docker run -it kaniko-alpine sh
echo "FROM alpine" > Dockerfile
echo "RUN echo hello" >> Dockerfile
echo "COPY Dockerfile Dockerfile" >> Dockerfile

executor version
executor -c . --no-push

# Output like:
#
# Kaniko version :  v1.6.0
#
# INFO[0000] Retrieving image manifest alpine             
# INFO[0000] Retrieving image alpine from registry index.docker.io 
# INFO[0000] GET KEYCHAIN                                 
# [...] 
# INFO[0001] RUN echo hello                               
# INFO[0001] Taking snapshot of full filesystem...        
# INFO[0001] cmd: /bin/sh                                 
# INFO[0001] args: [-c echo hello]                        
# INFO[0001] Running: [/bin/sh -c echo hello]             
# [...]

Note that using Kaniko binaries outside of their official image is not recommended, even though it may still work fine:

kaniko is meant to be run as an image: gcr.io/kaniko-project/executor. We do not recommend running the kaniko executor binary in another image, as it might not work.
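
Once built and pushed to a registry, such an image can replace gcr.io/kaniko-project/executor:debug in the deploy job, so jq and kaniko are available in the same stage. A rough sketch (the image name $CI_REGISTRY_IMAGE/ci-tools:latest and the jq command are just placeholders):

deploy:
  stage: deployment
  image: $CI_REGISTRY_IMAGE/ci-tools:latest
  script:
    - VALUE=$(jq -r '.someKey' app/config.json) # jq is now baked into the image
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/app/Dockerfile --destination $CI_REGISTRY_IMAGE/app:latest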

Pierre B.
  • I need to use the debug version, as only this version includes busybox, which is needed in my CI: https://github.com/GoogleContainerTools/kaniko/blob/master/deploy/Dockerfile_debug So is the added busybox the only difference? Could you please adapt your answer using this Dockerfile? – user3142695 Sep 20 '21 at 18:15
  • Sure, I edited the answer. Looks like the differences are: adding busybox and a few changes in how some binaries are built – Pierre B. Sep 21 '21 at 08:21
  • It's important to note that this Dockerfile can't be built with kaniko; it must be built with docker/buildkit or maybe `buildah bud`. Even the kaniko project uses docker to build its own executor images. – Waddles Apr 17 '23 at 11:43

I had the same need, as the image was to be used as the basis for a job in a GitLab CI pipeline, and I had to make some small modifications to make it work. If it helps, here is my version (no need for kubectl in my case; I just needed to run kaniko and vault in the same container):

  • Added libcap to address this issue: "Operation not Permitted" when running Vault in a container. This is only safe when not using Vault as a server.
  • Added missing env variable SSL_CERT_DIR
  • Removed Busybox (not needed anymore as we're running an Alpine container)
  • Optional kaniko executor entrypoint
FROM gcr.io/kaniko-project/executor:debug AS kaniko
FROM alpine:3.14.2

RUN apk --update add jq vault libcap
RUN setcap cap_ipc_lock= /usr/sbin/vault

COPY --from=kaniko /kaniko/ /kaniko/

ENV PATH $PATH:/usr/local/bin:/kaniko
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json
ENV SSL_CERT_DIR /kaniko/ssl/certs

#ENTRYPOINT ["/kaniko/executor"]
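
For reference, in a GitLab CI job using this image, a secret can be read from Vault right before calling kaniko, along these lines (a sketch; the secret path and field are placeholders, and VAULT_ADDR / VAULT_TOKEN are assumed to be provided as CI/CD variables):

    - export REGISTRY_PASSWORD=$(vault kv get -field=password secret/ci/registry)
    - /kaniko/executor --context $CI_PROJECT_DIR --destination $CI_REGISTRY_IMAGE/app:latest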
Joulss
  • This command makes trouble as it neglects other certificates: `ENV SSL_CERT_DIR /kaniko/ssl/certs`. It's better to use these instead: `RUN cp /kaniko/ssl/certs/ca-certificates.crt /usr/local/share/ca-certificates/ca-certificates-kaniko.crt` followed by `RUN update-ca-certificates` – Konstantin Polyntsov Feb 06 '23 at 13:03

With kaniko 1.9.2 and alpine 3.17.3, I had errors with the libssl / libcrypto symlinks:

Couldn't eval /usr/lib/libcrypto.so.3 with link /usr/lib/libcrypto.so.3
Couldn't eval /usr/lib/libssl.so.3 with link /usr/lib/libssl.so.3

To fix it:

# Install and fix libssl / libcrypto
RUN apk update && \
    apk add --no-cache  \
      ca-certificates \
      curl \
      unzip && \
    rm -rf /var/cache/apk/* && \
    rm -f /usr/lib/libssl.so.3 && \
    rm -f /usr/lib/libcrypto.so.3 && \
    ln /lib/libssl.so.3 /usr/lib/libssl.so.3  && \
    ln /lib/libcrypto.so.3 /usr/lib/libcrypto.so.3
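
For context, a complete Dockerfile with this workaround applied could look roughly like this (a sketch combining the earlier answers with the fix; the package list is just an example):

FROM gcr.io/kaniko-project/executor:debug AS kaniko
FROM alpine:3.17.3

# Install tools and replace the libssl / libcrypto symlinks with hard links
RUN apk update && \
    apk add --no-cache \
      ca-certificates \
      curl \
      jq && \
    rm -rf /var/cache/apk/* && \
    rm -f /usr/lib/libssl.so.3 && \
    rm -f /usr/lib/libcrypto.so.3 && \
    ln /lib/libssl.so.3 /usr/lib/libssl.so.3 && \
    ln /lib/libcrypto.so.3 /usr/lib/libcrypto.so.3

COPY --from=kaniko /kaniko/ /kaniko/

ENV PATH $PATH:/usr/local/bin:/kaniko
ENV DOCKER_CONFIG /kaniko/.docker/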


Since I can't comment on @Pierre B.'s answer, I am writing my comment as a separate answer. In general his solution worked for me, but I had to change the following line

COPY --from=kaniko /kaniko/docker-credential-acr /kaniko/docker-credential-acr

to

COPY --from=kaniko /kaniko/docker-credential-acr-env /kaniko/docker-credential-acr-env

I hope that this helps further.

mokanina