
I have the following repo structure:

/
  /dir1
    /file
  /dir2
    /file

Is it possible to build an Azure DevOps condition that executes one job when ./dir1/file changes and another job when ./dir2/file changes?

kagarlickij

3 Answers


We do have a condition to control whether a job runs or not, but it's not based on path filters.

You can specify the conditions under which each job runs. By default, a job runs if it does not depend on any other job, or if all of the jobs that it depends on have completed and succeeded. You can customize this behavior by forcing a job to run even if a previous job fails or by specifying a custom condition.
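To make that concrete, here is a minimal sketch of what such conditions look like in YAML (the job names and branch value are purely illustrative):

jobs:
- job: A
  steps:
  - script: echo building
- job: B
  dependsOn: A
  condition: failed()  # run B only if A failed
- job: C
  dependsOn: A
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))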

You can't run jobs based on source path directly. As a workaround, you can split the jobs into two pipelines and use the trigger's path filters to determine which one runs:

On the Triggers tab, there is an option to specify the source path to the project you want to build. When that source path is specified, only commits which contain modifications that match the include/exclude rules will trigger a build.

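In YAML, the equivalent of that Triggers-tab setting is a paths filter on each pipeline's trigger. A minimal sketch for the pipeline that should react to dir1 (a second pipeline would use dir2/* instead):

trigger:
  branches:
    include:
    - master
  paths:
    include:
    - dir1/*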

PatrickLu-MSFT

From what I know, this is not possible for a particular job. The documentation only explains how it can be done for the whole pipeline.

Here, for instance, is the syntax for a job; you will not find any trigger options in it:

- job: string  # name of the job, A-Z, a-z, 0-9, and underscore
  displayName: string  # friendly name to display in the UI
  dependsOn: string | [ string ]
  condition: string
  strategy:
    parallel: # parallel strategy
    matrix: # matrix strategy
    maxParallel: number # maximum number simultaneous matrix legs to run
    # note: `parallel` and `matrix` are mutually exclusive
    # you may specify one or the other; including both is an error
    # `maxParallel` is only valid with `matrix`
  continueOnError: boolean  # 'true' if future jobs should run even if this job fails; defaults to 'false'
  pool: pool # see pool schema
  workspace:
    clean: outputs | resources | all # what to clean up before the job runs
  container: containerReference # container to run this job inside
  timeoutInMinutes: number # how long to run the job before automatically cancelling
  cancelTimeoutInMinutes: number # how much time to give 'run always even if cancelled tasks' before killing them
  variables: { string: string } | [ variable | variableReference ] 
  steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
  services: { string: string | container } # container resources to run as a service container

Here is the documentation for jobs.

Krzysztof Madej

We had the same scenario, but we could not use separate pipelines because of gatekeepers who would have had to approve the same release multiple times across different pipelines (API, DB, UI, etc.).

We solved it using a solution we found here.
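For reference, the general shape of that kind of workaround is a single pipeline where one job detects which paths changed and publishes output variables, and downstream jobs run conditionally on them. A minimal sketch; the job, step, and variable names (Detect, detect, dir1Changed, dir2Changed) are invented for illustration, and the HEAD~1 diff is naive (it assumes a full clone and a single-commit push):

jobs:
- job: Detect
  steps:
  - checkout: self
    fetchDepth: 0  # fetch full history so git diff can see the previous commit
  - bash: |
      CHANGED=$(git diff --name-only HEAD~1 HEAD)
      if echo "$CHANGED" | grep -q '^dir1/'; then
        echo "##vso[task.setvariable variable=dir1Changed;isOutput=true]true"
      fi
      if echo "$CHANGED" | grep -q '^dir2/'; then
        echo "##vso[task.setvariable variable=dir2Changed;isOutput=true]true"
      fi
    name: detect

- job: BuildDir1
  dependsOn: Detect
  condition: eq(dependencies.Detect.outputs['detect.dir1Changed'], 'true')
  steps:
  - script: echo dir1/file changed

- job: BuildDir2
  dependsOn: Detect
  condition: eq(dependencies.Detect.outputs['detect.dir2Changed'], 'true')
  steps:
  - script: echo dir2/file changed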

Reynier Booysen