
I have a job that times out, and when it fails it dispatches another copy of itself, so that it runs indefinitely without overlapping. However, the job that fails stays in the queue and gets retried, so I eventually end up with more than one job running, which defeats the whole purpose.

Here is how I handle the job's failure:

use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

public function handle()
{
    //infinite websocket listening loop
}

public function failed(Exception $exception)
{
    $this::dispatch()->onQueue('long-queue');
    $this->delete();
}

$this->delete() comes from the InteractsWithQueue trait. What am I doing wrong?

Edit: I am using Horizon to run the jobs. Here is the configuration for the custom queue, set in config/horizon.php:

'supervisor-long' => [
    'connection' => 'redis-long',
    'queue' => ['long-queue'],
    'balance' => 'simple',
    'processes' => 3,
    'tries' => 1,
    'timeout' => 3600,
],

The job that I am dispatching creates a Thruway client that connects to a WebSocket server and subscribes to a channel for updates. I want this job to run forever, but with only one instance running at any time. That is why I want it to run once, without any retries, and once it times out, to dispatch another instance so the loop continues. I couldn't think of a better way to achieve this; is there another, better way to do it?
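For comparison, a single-instance guarantee can also be enforced with an atomic cache lock rather than relying on retry settings alone. The sketch below is illustrative, not from the question: the lock key `websocket-listener` is arbitrary, and `Cache::lock()` requires a cache driver that supports atomic locks (e.g. Redis), available since Laravel 5.6.

```php
use Illuminate\Support\Facades\Cache;

public function handle()
{
    // Acquire an atomic lock so only one listener can run at a time;
    // the lock auto-expires after 3600 seconds as a safety net.
    $lock = Cache::lock('websocket-listener', 3600);

    if (! $lock->get()) {
        return; // another instance already holds the lock
    }

    try {
        // infinite websocket listening loop
    } finally {
        $lock->release();
    }
}
```

With this guard in place, even an accidental duplicate dispatch exits immediately instead of opening a second WebSocket connection.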

PU2014
  • Why do you dispatch the job again in the failed function? – Giacomo M Jul 04 '19 at 13:37
  • I assume that after the first attempt you want to move your job to another queue (long-queue). Jobs are still running on the default queue because you don't set a maximum number of tries. You can change your queue command like so: `php artisan queue:work --tries=1`. After that, your jobs will be moved to the failed_jobs table after the first attempt. – Milena Grygier Jul 04 '19 at 13:40
  • What is the difference between what you are trying to do and what Laravel is already doing? – jeroen Jul 04 '19 at 13:40
  • To sum up all these comments: what is wrong with using the built-in retry functionality? – mrhn Jul 04 '19 at 13:43
  • @MartinHenriksen Sometimes you may want to move failed jobs to another queue, for example a queue with some delay or with a longer timeout – Milena Grygier Jul 04 '19 at 13:53
  • This example is fairly simple though, but I did not get the fact that it was on a different queue – mrhn Jul 04 '19 at 13:56
  • I have updated the question to provide more information on the context and provide the horizon configuration for the custom queue – PU2014 Jul 04 '19 at 20:47

3 Answers


It turns out you can actually do this in the failed method of a queued job:

/**
 * Handle the failing job.
 *
 * @param Exception $ex
 *
 * @return void
 */
public function failed(Exception $ex)
{
    $this->delete();
}
Steve Bauman

The reason failed is not executed is that it is only triggered when a job exceeds its maximum tries. The flow looks something like this:

$job->dispatch(); // try 1
// times out
// retries on try 2 now
// times out
// retries on try 3 now
// max attempts are hit and a MaxAttemptsExceededException is thrown
// failed is called

This logic changes if your job actually crashes; the example above only covers the case where it runs indefinitely and times out.

In your queue definition in config/horizon.php you can define tries:

'my-short-queue' => [
    'connection' => 'redis',
    'queue' => ['my-short-queue'],
    'balance' => 'simple',
    'processes' => 1,
    'tries' => 1,
]
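Equivalently, the attempt limit can be declared on the job class itself via the public $tries property, which takes precedence over the worker's default. A sketch (ListenToWebsocket is a hypothetical class name; the $tries and $timeout properties are standard Laravel queued-job options):

```php
class ListenToWebsocket implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $tries = 1;      // fail (and call failed()) after the first attempt
    public $timeout = 3600; // seconds the worker allows the job to run

    public function handle()
    {
        // infinite websocket listening loop
    }
}
```

Keeping the limit on the class makes the job behave the same no matter which worker or supervisor picks it up.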
mrhn
  • So you can choose this solution to limit the retry amount; Milena's comment works great there. – mrhn Jul 04 '19 at 14:02
  • I am using laravel-horizon to process the queue, so Milena's solution is not applicable – PU2014 Jul 04 '19 at 21:04

It turns out that my jobs were not failing, so the failed() method was not getting executed. Even if you set tries => 1 in your config/horizon.php file, you need to set the retry_after value to 0 in your config/queue.php file so that the job fails immediately after it times out. That way your failed() method gets called right away. Below are the final forms of my config files.

config/queue.php:

'redis-long' => [
    'driver' => 'redis',
    'connection' => 'default',
    'queue' => 'long-queue',
    'retry_after' => 0,
    'block_for' => null,
],

config/horizon.php:

'supervisor-long' => [
    'connection' => 'redis-long',
    'queue' => ['long-queue'],
    'balance' => 'simple',
    'processes' => 1,
    'tries' => 1,
    'timeout' => 3600,
],
PU2014
  • Usually the standard is to set retry_after to more than the timeout, so you ensure your job has a chance to execute. I'm not certain that retry_after => 0 is a good idea. – mrhn Jul 05 '19 at 07:26
  • I do not want the job to retry at any point and not setting `retry_after => 0` adds a delay to the whole process and that delay has to pass until the `failed()` function is called. This configuration yields the desired behaviour. – PU2014 Jul 12 '19 at 11:14