15

I've tried to find documentation on how to catch, in a Jenkinsfile pipeline, the error that occurs when a user cancels a job in the Jenkins web UI.

I haven't gotten the post or try/catch/finally approaches to work; they only trigger when something fails within the build.

This means resources are not freed up when someone cancels a job.

What I have today, is a script within a declarative pipeline, like so:

pipeline {
  agent none
  stages {
    stage("test") {
      steps {
        parallel (
          unit: {
            node("main-builder") {
              script {
                try { sh "<build stuff>" } catch (ex) { report } finally { cleanup }
              }
            }
          }
        )
      }
    }
  }
}

So, everything within the catch (ex) and finally blocks is ignored when a job is manually cancelled from the UI.

Wrench
  • 4,070
  • 4
  • 34
  • 46

2 Answers

15

Non-declarative approach:

When you abort a pipeline script build, an exception of type org.jenkinsci.plugins.workflow.steps.FlowInterruptedException is thrown. Release your resources in the catch block and re-throw the exception.

import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException

def releaseResources() {
  echo "Releasing resources"
  sleep 10
}

node {
  try {
    echo "Doing steps..."
  } catch (FlowInterruptedException interruptEx) {
    // The build was aborted: free resources, then re-throw
    // so the build is still marked as ABORTED.
    releaseResources()
    throw interruptEx
  }
}
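If you don't need to detect the abort specifically, note that in a scripted pipeline a finally block also runs while the FlowInterruptedException propagates, so it covers success, failure, and abort in one place. A minimal sketch, reusing the releaseResources() helper above:

```groovy
node {
  try {
    echo "Doing steps..."
  } finally {
    // Runs on success, failure, and abort alike; the build status
    // is preserved because any exception keeps propagating.
    releaseResources()
  }
}
```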

Declarative approach (UPDATED 11/2019):

According to Jenkins Declarative Pipeline docs, under post section:

cleanup

Run the steps in this post condition after every other post condition has been evaluated, regardless of the Pipeline or stage’s status.

So that should be a good place to free resources, regardless of whether the pipeline was aborted.

def releaseResources() {
  echo "Releasing resources"
  sleep 10
}

pipeline {
  agent none
  stages {
    stage("test") {
      steps {
        parallel (
          unit: {
            node("main-builder") {
              script {
                echo "Doing steps..."
                sleep 20
              }
            }
          }
        )
      }
      post {
        cleanup {
          releaseResources()
        }
      }
    }
  }
}
Travenin
  • 1,511
  • 1
  • 13
  • 24
  • This is how I do it, only I didn't need to import. – Jacob Apr 25 '17 at 20:19
  • Interesting. But, I tried without import line and it failed with cause "unable to resolve class FlowInterruptedException". – Travenin Apr 26 '17 at 09:15
  • @Travenin Alright, so this means that even though I'm using a script block in a declarative pipeline, I can't solve this in the declarative approach, but have to fall back to non-declarative mode? – Wrench Apr 26 '17 at 11:14
  • I can try to find some solution to declarative pipeline. Unfortunately wrapping whole pipeline{} with try-catch doesn't work. – Travenin Apr 26 '17 at 12:09
  • I don't know what kind of system you have there, but another way to free resources is to create a global function which frees resources if necessary, and run it at the _beginning_ of all relevant pipeline jobs. – Travenin Apr 26 '17 at 12:15
  • @Travenin Yes, I had that in mind as well, tidying up ghost resources. However, I suspect something else is wrong in my pipeline, verifying that now. If so, I will accept your answer and edit it to add the declarative version I have, for the benefit of others. Stay tuned. – Wrench Apr 27 '17 at 14:41
  • Could you please make the declarative approach explicit with an example? An import inside a declarative pipeline fails with `Unknown type: IMPORT`. – mxdsp Oct 28 '19 at 09:59
  • @mxdsp Sorry for the late reaction to your comment. I rewrote the answer about the declarative pipeline approach. I recommend using the "new" `cleanup` post-step condition to free resources. But if you need to know whether the pipeline was aborted, you can use `aborted { echo "I was aborted!" }` instead. – Travenin Nov 20 '19 at 11:11
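To illustrate the `aborted` condition mentioned in the last comment alongside `cleanup`, here is a minimal sketch (the stage name and echo messages are illustrative):

```groovy
pipeline {
  agent any
  stages {
    stage("test") {
      steps {
        echo "Doing steps..."
      }
    }
  }
  post {
    aborted {
      // Runs only when the build was aborted, e.g. from the UI.
      echo "I was aborted!"
    }
    cleanup {
      // Runs after every other post condition, regardless of status.
      echo "Releasing resources"
    }
  }
}
```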
1

You can add a post condition "cleanup" to the stage:

post {
    cleanup {
        script { ... }
        sh "remove lock"
    }
}
c9s
  • 1,888
  • 19
  • 15