65

Airflow is randomly not running queued tasks; some tasks don't even get the queued status. I keep seeing the following in the scheduler logs:

 [2018-02-28 02:24:58,780] {jobs.py:1077} INFO - No tasks to consider for execution.

I do see tasks in the database that either have no status or a queued status, but they never get started.

The Airflow setup is running https://github.com/puckel/docker-airflow on ECS with Redis. There are 4 scheduler threads and 4 Celery worker tasks. The tasks that are not running show a queued state (grey icon); when hovering over the task icon, the operator is null and the task details say:

    All dependencies are met but the task instance is not running. In most cases this just means that the task will probably be scheduled soon unless:
    - The scheduler is down or under heavy load

Metrics on the scheduler do not show heavy load. The DAG is very simple, with 2 independent tasks that only depend on their last run. There are also tasks in the same DAG that are stuck with no status (white icon).

Interestingly, when I restart the scheduler the tasks change to the running state.

l0n3r4n83r
  • 1,271
  • 1
  • 14
  • 25
  • We need a bit more info on your Airflow setup, such as the Airflow config and the DAG(s) that are working / not working. Keep in mind that Airflow will only put so many tasks into queued state at a time (not infinitely into the future). As those transition to running, more will move from null state to scheduled to queued. Are you using CeleryExecutor or something else? If so, have you started a Celery worker? – Taylor D. Edmiston Feb 28 '18 at 23:32
  • @TaylorEdmiston I added some details in there – l0n3r4n83r Mar 01 '18 at 18:13
  • @TaylorEdmiston the queued tasks start running on restarting the scheduler – l0n3r4n83r Mar 02 '18 at 00:11
  • @tobi6 happens for tasks not dependent on past – l0n3r4n83r Mar 02 '18 at 22:07
  • @l0n3r4ng3r I've added an answer below with some more context – Taylor D. Edmiston Mar 03 '18 at 19:52
  • 1
    We are having the same problem sporadically. Restarting the scheduler every 10 minutes/hour seems like an insane solution, however it's where we are headed. I'd encourage you to submit an issue to Jira. – Teresa Jun 26 '19 at 01:13

15 Answers

110

Airflow can be a bit tricky to set up.

  • Do you have the airflow scheduler running?
  • Do you have the airflow webserver running?
  • Have you checked that all DAGs you want to run are set to On in the web ui?
  • Do all the DAGs you want to run have a start date which is in the past?
  • Do all the DAGs you want to run have a proper schedule which is shown in the web ui?
  • If nothing else works, you can use the web ui to click on the dag, then on Graph View. Now select the first task and click on Task Instance. In the paragraph Task Instance Details you will see why a DAG is waiting or not running.

For instance, I once had a DAG that was wrongly set to depends_on_past: True, which prevented the current instance from starting correctly.
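
For illustration, here is a minimal sketch (the DAG id and dates are made up, and the import path is the Airflow 1.x one) of where depends_on_past and start_date typically live in default_args:

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator  # Airflow 1.x import path

default_args = {
    'start_date': datetime(2018, 1, 1),  # must be in the past
    'depends_on_past': False,            # True blocks a run until the previous run of this task succeeded
}

dag = DAG(dag_id='example_dag', default_args=default_args, schedule_interval='@daily')

task = DummyOperator(task_id='do_nothing', dag=dag)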

Also, a great resource directly in the docs has a few more hints: Why isn't my task getting scheduled?

tobi6
  • 8,033
  • 6
  • 26
  • 41
  • 12
    Another possible reason for tasks being scheduled but not running is that they could be assigned to an undefined pool. – aparkerlue Aug 02 '18 at 09:29
  • 53
    I recently found out the hard and very frustrating way that your third bullet point, making sure the DAG is set to *On* is also a requirement to manually trigger the DAG. This makes no sense to me, why do I have to schedule the DAG to manually trigger it? Do you know if there is something in the airflow docs that explains this design choice as I find it very counter intuitive? – Dan Nov 09 '18 at 12:23
  • 3
    Great checklist! – knutole Oct 23 '19 at 08:33
  • "Do all the DAGs you want to run have a proper schedule which is shown in the web ui?" no matter what type of schedule I try to give the DAG's, all mine come up with "Schedule: None" any tips on getting around this? – VendableFall Nov 27 '19 at 21:50
  • 2
    helpful answer, helped several times – aveLestat May 08 '20 at 14:29
  • 2
    This should be the accepted answer. It's the correct checklist. – kabirbaidhya Sep 11 '20 at 11:21
  • A note for developers on local machines. I activated my python virtual environment for the web ui but forgot to activate it in a new shell for the scheduler. – John David Five Oct 18 '20 at 18:49
16

I've been running a fork of the puckel/docker-airflow repo as well, mostly on Airflow 1.8, for about a year with 10M+ task instances. I think the issue persists in 1.9, but I'm not positive.

For whatever reason, there seems to be a long-standing issue with the Airflow scheduler where performance degrades over time. I've reviewed the scheduler code, but I'm still unclear on what exactly happens differently on a fresh start to kick it back into scheduling normally. One major difference is that scheduled and queued task states are rebuilt.

Scheduler Basics in the Airflow wiki provides a concise reference on how the scheduler works and its various states.

Most people solve the scheduler's diminishing-throughput problem by restarting the scheduler regularly. I've found success with a 1-hour interval personally, but have seen intervals as frequent as every 5-10 minutes used too. Your task volume, task duration, and parallelism settings are worth considering when experimenting with a restart interval.

For more info see:

This used to be addressed by restarting every X runs using the SCHEDULER_RUNS config setting, although that setting was recently removed from the default systemd scripts.

You might also consider posting to the Airflow dev mailing list. I know this has been discussed there a few times and one of the core contributors may be able to provide additional context.


Taylor D. Edmiston
  • 12,088
  • 6
  • 56
  • 76
  • 1
    "The scheduler should be restarted frequently" - But nowhere is specified how to restart it. Running `airflow scheduler` creates another job of the scheduler. It does not close the old one. – jack Jun 20 '18 at 07:30
  • Hi @jack - To restart the scheduler, press Ctrl-C with it in the foreground to kill the process (like killing any other foreground process from a shell). Then run `$ airflow scheduler` again. I don't think it's safe to run 2 scheduler instances at once because I believe there's a possible race condition; I have not tried running multiple scheduler instances simultaneously myself. – Taylor D. Edmiston Jun 21 '18 at 19:35
  • What if the scheduler is running with `airflow scheduler -D` then there is no `ctrl-c` to press. I think Airflow should prevent a command to restart the scheduler without shutting down and start it again. – jack Jun 24 '18 at 08:26
  • In that case, you'd want to use one of the standard kill commands: `kill`, `pkill`, or `killall`. More info - https://www.tecmint.com/how-to-kill-a-process-in-linux/. – Taylor D. Edmiston Jun 24 '18 at 16:31
  • True but if you do that and you'll go to the Job menu in the UI you will see that the scheduler is on RUNNING. maybe the force closing does not update the table. – jack Jun 25 '18 at 12:59
  • I haven't checked that myself but you might check the heartbeat timeouts in airflow.cfg to see if there's one that checks the scheduler that would update that. I'm not sure about that one. I can say I've done many many restarts across dozens of Airflow instances and this has worked for me since 1.8. – Taylor D. Edmiston Jun 25 '18 at 17:22
  • go to the Job menu in your UI you'll probably have thousands of rows in Running states which is incorrect. I think this has been fixed with a PR from recent days. – jack Jun 26 '18 at 07:38
  • Another option you might try that slipped my mind earlier is using the systemd or upstart scripts included in Airflow. https://github.com/apache/incubator-airflow/tree/master/scripts/ – Taylor D. Edmiston Jun 26 '18 at 18:10
8

Make sure you don't have datetime.now() as your start_date

It's intuitive to think that if you tell your DAG to start "now," it'll execute "now." BUT that doesn't take into account how Airflow itself actually reads datetime.now().

For a DAG to be executed, the start_date must be a time in the past, otherwise Airflow will assume that it's not yet ready to execute. When Airflow evaluates your DAG file, it interprets datetime.now() as the current timestamp (i.e. NOT a time in the past) and decides that it's not ready to run. Since this will happen every time Airflow heartbeats (evaluates your DAG) every 5-10 seconds, it'll never run.

To properly trigger your DAG to run, make sure to insert a fixed time in the past (e.g. datetime(2019,1,1)) and set catchup=False (unless you're looking to run a backfill).
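
As a minimal sketch (the dag_id is a placeholder), that looks like:

from datetime import datetime

from airflow import DAG

dag = DAG(
    dag_id='my_dag',
    start_date=datetime(2019, 1, 1),  # a fixed date in the past, NOT datetime.now()
    schedule_interval='@daily',
    catchup=False,                    # don't backfill the 2019-01-01 .. today gap
)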

By design, an Airflow DAG will execute at the completion of its schedule_interval

That means one schedule_interval AFTER the start date. An hourly DAG, for example, will execute its 2pm run when the clock strikes 3pm. The reasoning here is that Airflow can't ensure that all data corresponding to the 2pm interval is present until the end of that hourly interval.

This is a peculiar aspect to Airflow, but an important one to remember - especially if you're using default variables and macros.
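
A short, hedged illustration of that timing rule (the dag_id is a placeholder):

from datetime import datetime

from airflow import DAG

# The run covering the 14:00-15:00 interval has execution_date 14:00 but is
# only started once that interval has closed, i.e. at (or shortly after) 15:00.
dag = DAG(
    dag_id='hourly_example',
    start_date=datetime(2019, 1, 1),
    schedule_interval='@hourly',
)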

Time in Airflow is in UTC by default

This shouldn't come as a surprise given that the rest of your databases and APIs most likely also adhere to this format, but it's worth clarifying.
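
If you do want a DAG pinned to a local timezone, a hedged sketch along the lines of the timezone docs (the timezone name and dag_id are examples) is:

from datetime import datetime

import pendulum
from airflow import DAG

local_tz = pendulum.timezone('Europe/Amsterdam')

dag = DAG(
    dag_id='tz_aware_example',
    start_date=datetime(2019, 1, 1, tzinfo=local_tz),  # timezone-aware; otherwise Airflow assumes UTC
    schedule_interval='@daily',
)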

Full article and source here

NicoKowe
  • 2,989
  • 2
  • 19
  • 26
  • Thanks a lot. I had datetime.today() as start_date and was not able to run tasks. Once I changed it to a past date, it worked. Thanks a lot – Vaibhav Sahu Dec 09 '19 at 17:35
6

I also had a similar issue, but it is mostly related to SubDagOperator with more than 3000 task instances in total (30 tasks * 44 subdag tasks).

What I found out is that the airflow scheduler is mainly responsible for putting your scheduled tasks into "Queued Slots" (pool), while the airflow celery workers are the ones who pick up your queued tasks, put them into "Used Slots" (pool), and run them.

Based on your description, your scheduler should be working fine. I suggest you check your celery workers' logs to see whether there are any errors, or restart them to see whether it helps. I experienced issues where celery workers would go on strike for a few minutes and then start working again (especially with SubDagOperator).

Kevin Li
  • 2,068
  • 15
  • 27
  • should 'tasks in 'queue' slots' have empty hostname? we have situation that all 'used slots' are in worker-0, and the rest of 'queued slots' have empty hostname... and we have 6 pods/workers. – soMuchToLearnAndShare Apr 26 '21 at 13:11
  • 1
    Re” while airflow celery workers is the one who pick up your queued task and put it into the "Used Slots" (pool) and run it.”. How do we restart worker gracefully, if we find the worker appears idle? (`Kubectl logs worker-0` Command shows latest activity being many days ago ), while `queued slots` have many tasks queued. – soMuchToLearnAndShare Apr 27 '21 at 02:46
5

One of the very silly reasons could be that the DAG is "paused", which is the default state for a new DAG. I lost around 2 hours fighting it. If you are using the Airflow web interface, this shows up as a toggle next to your DAG in the list.

4

I faced the issue today and found that bullet point 4 from tobi6's answer above resolved it:

*'Do all the DAGs you want to run have a start date which is in the past?'*

I am using airflow version v1.10.3

Shahbaz Ali
  • 1,262
  • 1
  • 12
  • 13
4

My problem went one step further: in addition to my tasks being queued, I couldn't see any of my celery workers in the Flower UI. The solution was that, since I was running my celery worker as root, I had to make changes to my ~/.bashrc file.

The following steps made it work:

  1. Add export C_FORCE_ROOT=true to your ~/.bashrc file
  2. source ~/.bashrc
  3. Run worker : nohup airflow worker $* >> ~/airflow/logs/worker.logs &

Check your Flower UI at http://{HOST}:5555

Prithu Srinivas
  • 245
  • 1
  • 3
  • 9
  • Executing like root was my problem too, thanks to your answer I could figure it out. However I didn't followed these steps, I created an 'airflow' user instead: `export AIRFLOW_HOME="/opt/airflow"`, then `useradd -ms /bin/bash -d ${AIRFLOW_HOME} airflow`, then `chown -R airflow: ${AIRFLOW_HOME}`, and finally `su airflow -c "nohup airflow flower &"` – Eric Sant'Anna Apr 14 '20 at 21:23
3

I think it's worth mentioning that there's an open issue that can cause tasks to fail to run for no obvious reason: https://issues.apache.org/jira/browse/AIRFLOW-5506

The problem seems to occur when using LocalScheduler connected to a PostgreSQL airflow db, and results in the scheduler logging a number of "Killing PID xxxx" lines. Check the scheduler logs after the DAGs have been stalled without starting any new tasks for a while.

emote_control
  • 745
  • 6
  • 21
2

You can try to stop the webserver and the scheduler:

ps -ef | grep airflow       #show the process id
kill 1234                   #kill the webserver
kill 5678                   #kill the scheduler

Remove the files from the airflow folder if they exist (they will be created again):

airflow-scheduler.err
airflow-scheduler.pid
airflow-webserver.err
airflow-webserver.pid

Start the webserver and the scheduler again.

airflow webserver -D
airflow scheduler -D

-D will make the services run in the background.

dasilvadaniel
  • 413
  • 4
  • 8
2

I had a similar issue of a triggered DAG "running" indefinitely because its first task was stuck in the "queued" state.

I realized this was because of a "ghost" DAG that had actually changed its name. It seems that since the DAG had run in the past (it had data in the Postgres DB) and was referenced as a child DAG in other DAGs, triggering the parent DAGs that referenced the old name would "resurrect" the old DAG name, but with the new code. The old DAG name and the new DAG code did not match, producing an "infinite queued execution" bug.

Solution:

  1. Delete all the previous DAG runs with the old name
  2. Restart everything (webserver, worker, executor, ...) OR delete the relevant DAGs (with the "delete DAG" button in the UI).

The interpretation of the bug can vary but this fix worked in my case.

Ruben1
  • 61
  • 3
0

One more thing to check is whether the concurrency parameter of your DAG has been reached.

I experienced the same situation, where some tasks were shown with NO STATUS.

It turned out that my File_Sensor tasks were run with a timeout of up to 1 week, while the DAG timeout was only 5 hours. When the files were missing, many sensor tasks ended up running at the same time, which overloaded the concurrency limit!

The dependent tasks couldn't be started before the sensor task succeeded; when the DAG timed out, they got NO STATUS.

My solution:

  • Carefully set the task and DAG timeouts (see the sketch below)
  • Increase dag_concurrency in the airflow.cfg file in the AIRFLOW_HOME folder.

Please refer to the docs. https://airflow.apache.org/faq.html#why-isn-t-my-task-getting-scheduled
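
For illustration, a hedged sketch of the first bullet (the ids, the file path, and the Airflow 1.10 import path are assumptions) that keeps the sensor timeout within the DAG run timeout:

from datetime import datetime, timedelta

from airflow import DAG
from airflow.contrib.sensors.file_sensor import FileSensor  # Airflow 1.10 import path

dag = DAG(
    dag_id='sensor_timeout_example',
    start_date=datetime(2019, 1, 1),
    schedule_interval='@daily',
    concurrency=16,                     # max running task instances for this DAG
    dagrun_timeout=timedelta(hours=5),  # fail the run if it exceeds 5 hours
)

wait_for_file = FileSensor(
    task_id='wait_for_file',
    filepath='/data/input.csv',
    poke_interval=300,          # check every 5 minutes
    timeout=4 * 60 * 60,        # give up after 4 hours instead of 1 week
    dag=dag,
)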

0

I believe this is an issue with celery version 4.2.1 and redis 3.0.1 as described here:

https://github.com/celery/celery/issues/3808

We resolved the issue by downgrading our redis package to version 2.10.6:

redis==2.10.6

randal25
  • 1,290
  • 13
  • 10
0

In my case, tasks were not being launched because I had a pool configured for all operators but hadn't created it; hence, tasks were not even scheduled. An operator looked like:

from airflow.operators.dummy_operator import DummyOperator

foo = DummyOperator(
    task_id='foo',
    dag=dag,        # the DAG object defined elsewhere in the file
    pool='capser'   # this pool must exist, otherwise the task is never scheduled
)

To create a pool, go to Admin > Pools > Create and set the slots, for example to 128, which works for me. You can also configure it by using the CLI.

0

Counter-intuitive UI message! I have spent days on this, so I want to elaborate on my specific issue(s).

Each DAG has a state; it is either 'paused' or 'not paused'.

The first confusion: what is the default state on startup? The attached UI message seems to indicate that the state is 'not paused' and that clicking the toggle pauses it.

In reality, the default state is 'paused'. This state can be controlled by settings, environment variables, parameters, and the UI. I have detailed them below.

The second confusion again arises from the UI. When we manually trigger a DAG which is in the paused state, the UI shows the DAG as running (green circle)! But the DAG is actually still paused, and the tasks will not execute unless it is un-paused.

If we read the task instance details, the message would be:

Task is in the 'None' state which is not a valid state for execution. The task must be cleared in order to be run.

What is the 'None' state!? And clear which task?!

The actual problem is that the DAG is paused. On toggling the DAG state, the tasks start to execute.

The paused state of the DAG can be changed by:

  • clicking the button on the UI.
  • setting your particular DAG to run by adding the parameter below to your DAG
DAG(dag_id='your-dag', is_paused_upon_creation=False)

  • setting the config variable in the airflow.cfg file (caution: this un-pauses all your DAGs at creation, including the example ones)
dags_are_paused_at_creation = False
  • configuring an environment variable before starting up the scheduler/webserver (caution: this also un-pauses all your DAGs, including the example ones)
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION=False

0

Make sure that your task is assigned to the same queue that your workers are listening to. This means that in your DAG file you have to set 'queue': 'queue_name', and in your worker configuration you have to set either default_queue = 'queue_name' in airflow.cfg or AIRFLOW__OPERATORS__DEFAULT_QUEUE: 'queue_name' in the docker-compose.yaml (in case you're using Docker).
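
For example, a hedged sketch of the DAG side (the ids and the queue name are placeholders; the import path is the Airflow 1.x one):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator  # Airflow 1.x import path

dag = DAG(dag_id='queue_example', start_date=datetime(2019, 1, 1), schedule_interval=None)

task = BashOperator(
    task_id='runs_on_custom_queue',
    bash_command='echo hello',
    queue='queue_name',  # must match a queue your Celery workers listen to
    dag=dag,
)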

tsveti_iko
  • 6,834
  • 3
  • 47
  • 39