
I am running a Jenkins instance on a virtual machine (note: not in a container; from now on called the "Host"). During a Jenkins build I spin up a container (from now on called the "Builder") on the "Host", which mounts the docker socket from the "Host". Within the started "Builder" I call the checkout function to clone my repository and then build a docker image from a Dockerfile. The resulting image is then visible on the "Host"'s docker. However, to my surprise, the files I checked out in the "Builder" are stored in the Jenkins workspace on the "Host", even after the "Builder" exits.

I am not sure why this is the case. The idea behind the "Builder" was to have a clean separation of data between the "Host" and whatever gets downloaded during the checkout, so that nothing is left behind once the "Builder" shuts down.

The reason behind this is that the created image is then tested on the "Host", and I want to ensure that there is no residue left from the build process, without having to push to an artifact repository in between.

Now: Is there any way to separate the storage of the "Builder" and "Host"?

EDIT

Please find the Jenkins file I am using below:

node {
  docker.image('ubuntu:17.04').withRun('-v /var/run/docker.sock:/var/run/docker.sock') { c ->
    def checkoutVars = checkout([
      $class: 'GitSCM', 
      branches: [[
        name: '*/master'
      ]],
      doGenerateSubmoduleConfigurations: false, 
      extensions: [], 
      submoduleCfg: [], 
      userRemoteConfigs: [[
        credentialsId: 'credentialsID', 
        url: 'GitHub-URL'
      ]]
    ])
    stage('Build docker image in container') {
      sh "docker build --label build_id=$BUILD_ID --label build_url=$BUILD_URL --label git_commit=" + checkoutVars.GIT_COMMIT + " -t apache2:$BUILD_ID  ."
    }
  }
  stage('Inspect docker image'){
    sh "docker image inspect \$(docker images | awk -vs='2' -ve='2' 'NR>=s&&NR<=e' | awk '{print \$3}')"    
  }
}

EDIT1

When replacing

docker.image('ubuntu:17.04').withRun('-v /var/run/docker.sock:/var/run/docker.sock') { c ->

with:

docker.image('ubuntu:17.04').inside("--volume=/var/run/docker.sock:/var/run/docker.sock") {

I get:

/var/lib/jenkins/workspace/PROJECT@tmp/durable-fc0ee627/script.sh: 2: /var/lib/jenkins/workspace/PROJECT@tmp/durable-fc0ee627/script.sh: docker: not found

However, the Git repository is still cloned onto the "Host".
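
For what it's worth, the docker: not found message is most likely because the ubuntu:17.04 image does not contain a docker client. A minimal sketch of the same inside call with an image that ships the docker CLI (using the official docker image here is my assumption, not part of the original setup):

// Sketch only: assumes the official 'docker' image, which bundles the docker CLI.
// The CLI talks to the "Host" daemon through the mounted socket, but note that
// 'inside' still bind-mounts the Jenkins workspace from the "Host" into the container.
docker.image('docker:latest').inside('--volume=/var/run/docker.sock:/var/run/docker.sock') {
    sh 'docker version' // the docker client is now found
}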

stiller_leser

2 Answers


I decided to move away from the Jenkins docker pipeline and to use native docker commands instead. The following command does the trick. NOTE: Do not use the git config part (it disables SSL verification) in any situation!

docker run --rm --privileged --name docker-builder \
-v /var/run/docker.sock:/var/run/docker.sock docker:dind \
/bin/sh -c 'apk add --no-cache git && \
git config --global http.sslVerify false && \
git clone GIT-URL-GOES-HERE && \
docker build -t IMAGENAME PATH-TO-REPOSITORY/.'
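
Called from a Jenkinsfile, this can be wrapped in a plain sh step; a rough sketch (GIT-URL-GOES-HERE, IMAGENAME and PATH-TO-REPOSITORY remain placeholders, and the sslVerify workaround is left out on purpose):

stage('Build docker image in throwaway container') {
  // Sketch only: the clone lives inside the docker:dind container's filesystem,
  // so it disappears when the container is removed (--rm); only the built image
  // ends up on the "Host" daemon through the mounted socket.
  sh 'docker run --rm --privileged --name docker-builder -v /var/run/docker.sock:/var/run/docker.sock docker:dind /bin/sh -c "apk add --no-cache git && git clone GIT-URL-GOES-HERE && docker build -t IMAGENAME PATH-TO-REPOSITORY/."'
}
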
stiller_leser

Jenkins allocates a workspace that is mounted into your container, so Jenkins is already doing this. Depending on how separated you want it, you have several options (see the sketch after the list):

  1. Using the ws statement
  2. Wiping out the workspace
  3. Using the skipDefaultCheckout option to prevent automatic checkouts
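
A minimal sketch of options 1 and 2 in a scripted pipeline (ws and deleteDir are standard pipeline steps; skipDefaultCheckout is a declarative-pipeline option and therefore not shown), meant as an illustration rather than a drop-in replacement:

node {
  // Option 1: allocate a dedicated, disposable workspace for this build
  ws("builder-${env.BUILD_ID}") {
    try {
      checkout scm // or the explicit GitSCM checkout from the question
      sh "docker build -t apache2:${env.BUILD_ID} ."
    } finally {
      // Option 2: wipe the workspace once the build is done
      deleteDir()
    }
  }
}
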
Hendrik M Halkow
  • Thanks for the heads up. I am indeed using cleanWs() and deleteDir() for cleanup. However, in a perfect world I would like Jenkins not to mount data into my container, but to let the container store the data, so that it is removed when the container shuts down (without having to call Jenkins functions). If I have to use the Jenkins functions, I can directly run `docker build` on the "Host" with no need for a separate container. That might actually make more sense; I am just evaluating my options here. – stiller_leser Feb 16 '18 at 10:26