SUMMARY
The tasks are "all done" if the count of upstream tasks in the SUCCESS, FAILED, UPSTREAM_FAILED, or SKIPPED state is greater than or equal to the count of all upstream tasks.
I am not sure why it would ever be greater; perhaps subdags do something odd to the counts.
The tasks are "all success" if the count of upstream tasks equals the count of upstream tasks in the SUCCESS state.
DETAILS
The code for evaluating trigger rules is here: https://github.com/apache/incubator-airflow/blob/master/airflow/ti_deps/deps/trigger_rule_dep.py#L72
- ALL_DONE
The following code runs the query and unpacks the first row (the query is an aggregation that only ever returns one row anyway) into these variables:
successes, skipped, failed, upstream_failed, done = qry.first()
The "done" column in the query corresponds to func.count(TI.task_id),
in other words a count of all the tasks matching the filter.
The filter specifies that it is counting only upstream tasks of the current task, from the current DAG and the current execution date, with this state condition:
TI.state.in_([
State.SUCCESS, State.FAILED,
State.UPSTREAM_FAILED, State.SKIPPED])
So done is a count of the upstream tasks in one of those four states.
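The aggregation itself is SQL, but the counting logic can be sketched in plain Python (illustrative only; the lowercase state strings mirror Airflow's State constants, and count_done is a hypothetical helper, not Airflow's code):

```python
# The four terminal states the filter lists; a task is "done" once it
# reaches any of them.
FINISHED_STATES = {"success", "failed", "upstream_failed", "skipped"}

def count_done(upstream_states):
    """Count upstream tasks that have reached a terminal state."""
    return sum(1 for state in upstream_states if state in FINISHED_STATES)

# Example: four upstream tasks, one still running.
states = ["success", "failed", "skipped", "running"]
print(count_done(states))  # prints 3
```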
Later there is this code:
upstream = len(task.upstream_task_ids)
...
upstream_done = done >= upstream
And the actual trigger rule fails only on this check:
if not upstream_done
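Putting the pieces together, the ALL_DONE check reduces to a single comparison. A minimal sketch (hypothetical helper, not Airflow's code), which also shows why the >= comparison still passes if the count somehow exceeds the number of upstream task ids:

```python
def all_done_satisfied(done, upstream):
    """ALL_DONE passes once at least as many upstream tasks are in a
    terminal state as there are upstream task ids; >= (rather than ==)
    tolerates the count exceeding the number of upstream tasks."""
    upstream_done = done >= upstream
    return upstream_done

print(all_done_satisfied(done=4, upstream=4))  # prints True
print(all_done_satisfied(done=3, upstream=4))  # prints False
print(all_done_satisfied(done=5, upstream=4))  # prints True
```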
- ALL_SUCCESS
The code is fairly straightforward and the concept is intuitive:
num_failures = upstream - successes
if num_failures > 0:
... it fails
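Sketched the same way (hypothetical helper, not Airflow's code), the ALL_SUCCESS check counts every upstream task that is not in the SUCCESS state against the rule:

```python
def all_success_satisfied(successes, upstream):
    """ALL_SUCCESS fails if any upstream task is not a success:
    num_failures = upstream - successes, so skipped and upstream_failed
    tasks count against the rule just like failed ones."""
    num_failures = upstream - successes
    return not num_failures > 0

print(all_success_satisfied(successes=4, upstream=4))  # prints True
print(all_success_satisfied(successes=3, upstream=4))  # prints False
```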