I am using Laravel Horizon with Redis and I am trying to throttle the queue. I call an external API that has a rate limit of 100 requests per minute, and I need to make about 700 requests. Each job I push onto the queue performs exactly one API call, so throttling the queue should keep me within the limit. For some reason no throttling happens on the server and it instead blows through the queue (triggering many API errors, of course). The throttle works locally, just not on my server.
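
For reference, each of these jobs is structured roughly like the sketch below. This is a simplified illustration only: the namespaces, constructor, and the body of `handle()` are assumptions, and the actual API client and response handling are omitted.

    <?php

    namespace App\Jobs;

    use App\Player;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Foundation\Bus\Dispatchable;
    use Illuminate\Queue\InteractsWithQueue;
    use Illuminate\Queue\SerializesModels;

    class UpdatePlayer implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

        protected $player;

        public function __construct(Player $player)
        {
            $this->player = $player;
        }

        public function handle()
        {
            // Exactly one call to the external API per job,
            // then persist the response onto the Player model.
        }
    }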

I was originally trying to throttle per Laravel's queue documentation, but I could only get it to work locally, so I switched to the laravel-queue-rate-limit package on GitHub. As per its README, I added the following to my queue.php config file:

'rateLimits' => [
    'default' => [ // queue name
        'allows' => 75, // 75 jobs
        'every' => 60 // per 60 seconds
    ]
],

For some reason the throttling works properly in my local Ubuntu environment, but it does not work on my server (also Ubuntu). On the server it just blows through the queue as if there is no throttle in place.

Is there something I am doing wrong, or is there a better way to handle a rate-limited external API?

Edit 1:

config/horizon.php

    'environments' => [
        'production' => [
            'supervisor-1' => [
                'connection' => 'redis',
                'queue' => ['default'],
                'balance' => 'simple',
                'processes' => 3,
                'tries' => 100,
            ],
        ],
    ],

One of the handle() methods that dispatches most of the jobs:

    public function handle()
    {
        $updatedPlayerIds = [];
        foreach ($this->players as $player) {
            $playerUpdate = Player::updateOrCreate(
                [
                    'id' => $player['id'],
                ],
                [
                    'faction_id' => $player['faction_id'],
                    'name' => $player['name'],
                ]
            );

            // Store the IDs of the records that were updated
            $updatedPlayerIds[] = $playerUpdate->id;

            // If it's a new player, or the player was last updated a while ago, fetch new data
            if ($playerUpdate->wasRecentlyCreated ||
                $playerUpdate->last_complete_update_at == null ||
                Carbon::parse($playerUpdate->last_complete_update_at)->diffInHours(Carbon::now()) >= 6) {
                Log::info("Updating '{$playerUpdate->name}' with new data", ['playerUpdate' => $playerUpdate]);
                UpdatePlayer::dispatch($playerUpdate);
            } else {
//                Log::debug("Player data fresh, no update performed", ['playerUpdate' => $playerUpdate]);
            }
        }

        // Delete any players whose IDs were not returned/updated via the API
        Player::where('faction_id', $this->faction->id)->whereNotIn('id', $updatedPlayerIds)->delete();
    }

Also, here is a rough diagram I made to illustrate how multiple job classes end up being dispatched in a short amount of time, especially ones like UpdatePlayer, which is often dispatched around 700 times.

[diagram of the job dispatch flow]

ComputerLocus
  • Could you try adding another key-value pair to the `default` array: `block => 60`? This is perhaps a timeout issue where a timeout kicks in after 3 seconds and, immediately, another callback is invoked. – Qumber Jun 30 '20 at 17:51
  • Also, it looks like the aforementioned package doesn't work with Horizon. You're probably better off with Laravel's built-in queue throttling. Add `->block(60)` to the `Redis::throttle('key')` chain. – Qumber Jun 30 '20 at 18:09
  • @Qumber Adding the blocking looks to have fixed it! I also swapped back to the normal Laravel throttling you mentioned. Definitely prefer using the built-in methods anyway. – ComputerLocus Jun 30 '20 at 23:12
  • @Qumber However, now the issue seems to be that jobs often just sit in the queue for quite a while. I added a middleware that applies this throttle: `Redis::throttle('torn-api')->allow(75)->every(60)->block(60)`. I am trying to only allow 75 jobs to run per 60 seconds. – ComputerLocus Jun 30 '20 at 23:51
  • @Qumber In fact, they seem to just get stuck there, and if I manually try to clear the queue using `php artisan queue:work` instead of the supervisord worker, I get the error `Illuminate\Queue\MaxAttemptsExceededException: App\Jobs\UpdatePlayer has been attempted too many times or run too long. The job may have previously timed out.` And I see a bunch of attempts on them, like 20 attempts. – ComputerLocus Jun 30 '20 at 23:54
  • Can you please paste the entire handle function in the question? Also, have you defined a timeout and a max retry in Horizon's config (`config/horizon.php`)? Maybe do that and log any exception using the `failed` method in the job itself, like this: https://stackoverflow.com/a/53172981/10625611 – Qumber Jul 01 '20 at 06:24
  • @Qumber Doing some of the things you mentioned and then running the queue right from the command line via `php artisan queue:work` seems much more reliable than the supervisor method. I am considering just backgrounding the queue process instead of using the supervisor, since it seems too inconsistent. Also, I edited some extra details into the OP. – ComputerLocus Jul 02 '20 at 23:46
  • This just looks like a config issue to me: set `retry_after` and `timeout` values in Horizon's config. Your `retry_after` value should always be greater than how much time it takes to do a job, and the `timeout` value should be a few seconds shorter than the `retry_after` value. See this [issue](https://github.com/laravel/horizon/issues/128) and this point in the [doc](https://laravel.com/docs/6.x/queues#job-expirations-and-timeouts). – Qumber Jul 03 '20 at 06:31
  • @Qumber Great, I think it is fixed now. As I assume you'll want to scoop up this bounty again, I will lay out what I ended up changing overall: in queue.php, `retry_after` set to 70; in horizon.php, `waits.redis:default` changed to 65 and the `timeout` changed to 60 (in the environments section). I also removed the various flags I had on my supervisord queue listeners so that only the config values are used. In my `RateLimited` middleware I added the `->block(60)` like you mentioned, and I swapped back to the regular throttling instead of using the package. – ComputerLocus Jul 04 '20 at 22:23
  • Glad to be of help. :) I most definitely am after the bounty. :D Converting this into an answer. – Qumber Jul 05 '20 at 06:15

1 Answer


It looks like the package you mention (laravel-queue-rate-limit) does not work well with Horizon. You're probably better off using Laravel's built-in throttling.
In the `Redis::throttle` call, add `->block(60)` to match `->every(60)`, so that the default block timeout does not kick in and fire the failure callback before the 60 seconds are up:

    Redis::throttle('torn-api')->allow(75)->every(60)->block(60)
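
In practice this throttle usually lives in a job middleware; the comments above mention a `RateLimited` middleware. Here is a minimal sketch, assuming Laravel's job middleware feature and a hypothetical `App\Jobs\Middleware\RateLimited` class (the throttle key and release delay are just examples):

    <?php

    namespace App\Jobs\Middleware;

    use Illuminate\Support\Facades\Redis;

    class RateLimited
    {
        public function handle($job, $next)
        {
            Redis::throttle('torn-api')
                ->allow(75)  // at most 75 jobs...
                ->every(60)  // ...per 60 seconds
                ->block(60)  // wait up to 60 seconds for a slot
                ->then(function () use ($job, $next) {
                    // Slot obtained, run the job.
                    $next($job);
                }, function () use ($job) {
                    // Could not obtain a slot, put the job back on the queue.
                    $job->release(10);
                });
        }
    }

The job then attaches the middleware via its `middleware()` method:

    public function middleware()
    {
        return [new \App\Jobs\Middleware\RateLimited];
    }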

Also, consider adding `timeout` and max retry (`tries`) values to Horizon's config (`config/horizon.php`). You could also log any exception using the `failed` method in the job itself; see https://stackoverflow.com/a/53172981/10625611.
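
A minimal sketch of such a `failed` method, assuming the job keeps a `$player` property (the log message and context are only examples):

    public function failed(\Exception $exception) // Throwable in newer Laravel versions
    {
        // Called once the job has exhausted its attempts or failed permanently.
        Log::error('UpdatePlayer job failed', [
            'player_id' => $this->player->id ?? null,
            'exception' => $exception->getMessage(),
        ]);
    }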

Set a retry_after value in config/queue.php and a timeout value in Horizon's config. Your retry_after value should always be greater than the time it takes to run a job, and the timeout value should be a few seconds shorter than the retry_after value. "This will ensure that a worker processing a given job is always killed before the job is retried." See this issue (https://github.com/laravel/horizon/issues/128) and this point in the docs (https://laravel.com/docs/6.x/queues#job-expirations-and-timeouts).
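
As a rough guide, the values the asker settled on in the comments (retry_after of 70, timeout of 60) would look something like this; the surrounding keys follow Laravel's default config layout and are otherwise assumptions.

In config/queue.php:

    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => env('REDIS_QUEUE', 'default'),
        'retry_after' => 70, // longer than the longest-running job
        'block_for' => null,
    ],

In config/horizon.php:

    'environments' => [
        'production' => [
            'supervisor-1' => [
                'connection' => 'redis',
                'queue' => ['default'],
                'balance' => 'simple',
                'processes' => 3,
                'tries' => 100,
                'timeout' => 60, // a few seconds shorter than retry_after
            ],
        ],
    ],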

Qumber