16

The Situation

I'm using Laravel Queues to process large numbers of media files; an individual job is expected to take minutes (let's just say up to an hour).

I am using Supervisor to run my queue, and I am running 20 processes at a time. My supervisor config file looks like this:

[program:duplitron-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/duplitron/artisan queue:listen database --timeout=0 --memory=500 --tries=1
autostart=true
autorestart=true
user=duplitron
numprocs=20
redirect_stderr=true
stdout_logfile=/var/www/duplitron/storage/logs/duplitron-worker.log

There are a few oddities that I don't know how to explain or correct:

  1. My jobs fairly consistently fail after running for 60 to 65 seconds.
  2. After being marked as failed, the job continues to run and eventually ends up resolving successfully.
  3. When I run the failed task in isolation to find the cause of the issue, it succeeds just fine.

I strongly believe this is a timeout issue; however, I was under the impression that --timeout=0 would result in an unlimited timeout.

The Question

How can I prevent this temporary "failure" job state? Are there other places where a queue timeout might be invoked that I'm not aware of?

slifty
  • Check the `max_execution_time` in your php.ini. What is it set to? If it's 60 secs, there's your problem; try increasing the timeout. – Willy Pt Dec 28 '15 at 03:34
  • Great thought @WillyPt -- wouldn't the php.ini settings terminate the entire script, though? It continues to resolve. (FWIW `max_execution_time` was set to 30s, I'll explore and experiment along those lines). – slifty Dec 28 '15 at 03:56

4 Answers

26

It turns out that in addition to `timeout` there is an `expire` setting defined in `config/queue.php`:

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'expire' => 60,
    ],

Changing that to a higher value did the trick.


UPDATE: This parameter is now called `retry_after`:

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 60,
    ],
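For what it's worth, the Laravel docs advise that a worker's `--timeout` value should be several seconds shorter than `retry_after`; otherwise a job that is still running may be released back onto the queue and processed twice. A sketch for the hour-long jobs in the question (the 3700/3600 values are illustrative, not from the original answer):

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        // must comfortably exceed the runtime of the slowest job
        'retry_after' => 3700,
    ],

paired with a worker started as `php artisan queue:work database --timeout=3600`.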
slifty
  • Thank you sir, this helped me solve my problem. But as pointed out by David's answer, "expire" is now called "retry_after". – Afif Zafri Nov 15 '19 at 06:31
  • I have implemented a queue for importing a huge CSV. I don't want to go to the server every time and run `php artisan queue:work` to check on progress; how do I handle this case? @slifty – Shashank Shah Dec 08 '21 at 06:58
13

Important note: `expire` is now called `retry_after` (Laravel 5.4).

David Vielhuber
6

This will work:

    php artisan queue:listen --timeout=1200

Adjust the time based on your needs.
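If your workers run under Supervisor, as in the question, the flag goes on the worker command in the Supervisor config; a minimal sketch reusing the asker's program definition (the path and program name are taken from the question, the 1200 from this answer):

    [program:duplitron-worker]
    command=php /var/www/duplitron/artisan queue:listen database --timeout=1200 --memory=500 --tries=1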

Munna Khan
  • I have implemented a queue for importing a huge CSV. I don't want to go to the server every time and run `php artisan queue:work` to check on progress; how do I handle this case? @Munna Khan – Shashank Shah Dec 08 '21 at 06:58
  • You can set up a scheduler to keep the queue worker alive. Another way is to configure Supervisor on your server. – Munna Khan Dec 08 '21 at 12:05
  • Yes, perfect, thanks for the help! Supervisor should keep the queue running. – Shashank Shah Dec 08 '21 at 12:10
  • NOTE: If this is null (aka not set), there will not be a timeout for the QUEUE WORKER. The JOB timeout defaults to 60 seconds. That can be changed in the job, which is not what this answer does. This answer looks to be conflating the two timeouts. – jonlink Jun 22 '23 at 18:12
0

In my case, I am using `Symfony\Component\Process\Process` and had to set its timeout as well:

    use Symfony\Component\Process\Process;

    $process = new Process([...]);
    $process->setTimeout(null); // disable Symfony's own timeout (defaults to 60 seconds)
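For context, a minimal runnable sketch; the ffmpeg command is purely illustrative of a long-running media task like the asker's:

    use Symfony\Component\Process\Process;

    // hypothetical long-running media command
    $process = new Process(['ffmpeg', '-i', 'input.mp4', 'output.mp4']);
    $process->setTimeout(null); // no time limit; the default would be 60 seconds
    $process->run();

    if (!$process->isSuccessful()) {
        throw new \RuntimeException($process->getErrorOutput());
    }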
Soli