
I'm getting the annoying HTTP 429 "Too Many Requests" error when sending multiple requests. What's worse, the API seems to limit each request to 10 products.

So I have code that breaks my ASIN array into groups of 10 and chains them into multiple requests. However, when I wait 1 second from the start of each request before making the next one, it doesn't work reliably and still returns the error. Increasing the interval to 2 seconds per request fixes it, but makes things too slow (a request usually takes 0.5 seconds, so it then idles for the remaining 1.5 seconds).

Amazon doesn't document exactly how these limits work, so we can only guess.

Is there a way to improve this further, or handle the queuing differently?

```php
$all_posts = get_posts(array(
    'posts_per_page' => -1,
));
$serialized = serialize($all_posts);
// Extract the ASINs from the [asa] shortcodes in the serialized posts
preg_match_all("/]([^\]]*?)\[\/asa\]/", $serialized, $matches);
$amazon_items = $matches[1]; // here we get an array of ASINs

$time_end   = microtime(true);
$time_start = 0;

$out = array();
for ($i = 0; $i < count($amazon_items); $i += 10) {
    $arr = array();
    for ($j = 0; $j < 10 && $i + $j < count($amazon_items); $j++) {
        $arr[] = $amazon_items[$i + $j];
    }
    if ($time_end - $time_start < 2) {
        echo 'sleeping ' . (2 - ($time_end - $time_start)) . ' sec; ';
        // usleep() takes microseconds; sleep() would truncate the fractional part
        usleep((int) round((2 - ($time_end - $time_start)) * 1e6));
    }
    $time_start = microtime(true);
    $list = GetItems($arr);
    $time_end = microtime(true);
    echo $time_end - $time_start . ' sec,', PHP_EOL;

    $out = array_merge($out, $list);
}
```
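As a side note, the comments below suggest PHP's built-in `array_chunk()` for the batching. A minimal sketch of that variant, assuming the same `GetItems()` function as in the question (passed in as a callable here so the function is self-contained):

```php
<?php
// Sketch: the same throttled loop, but with array_chunk() doing the batching.
// $getItems stands in for the question's GetItems() function.

function fetch_in_batches(array $asins, callable $getItems, float $min_gap = 2.0): array
{
    $out = array();
    $last_start = 0.0;

    foreach (array_chunk($asins, 10) as $chunk) {
        $elapsed = microtime(true) - $last_start;
        if ($elapsed < $min_gap) {
            // usleep() takes microseconds, so the fractional wait isn't truncated
            usleep((int) round(($min_gap - $elapsed) * 1e6));
        }
        $last_start = microtime(true);
        $out = array_merge($out, $getItems($chunk));
    }
    return $out;
}
```

Called as `$out = fetch_in_batches($amazon_items, 'GetItems');`, this behaves like the loop above without the manual index bookkeeping.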
Anonymous
  • The [Amazon MWS product API documentation is here](https://docs.developer.amazonservices.com/en_UK/products/Products_Throttling.html), but I'm not sure if that is the one you are using. On a sidenote you could use php's [array_chunk](https://www.php.net/manual/en/function.array-chunk.php)() function for chunking the stuff you iterate. – Peter Rakmanyi Jan 28 '20 at 16:07
  • I'm using the new Paapi 5.0 of course, the page you are referring to appears to describe an old version from 2011. Do you think it's still relevant? – Anonymous Jan 28 '20 at 16:14

2 Answers


I found the Product Advertising API 5.0 documentation here. This page explains the issue you are having with the allowed rates.

There is a reasonable daily limit, but if every visit to your site triggers multiple calls, you will quickly exhaust your quota or make calls too frequently. Throttling is necessary; otherwise bad code at some third party could DDoS the API provider.

Without knowing more about how you intend to use this API, I suggest setting up a microservice backend that caches the data from the API and queues requests to the original. Then you can query your own API as much as you want.
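A minimal sketch of that caching idea in plain PHP. The file-based backend, function name, and TTL are illustrative choices, not part of this answer; in WordPress, transients or an object cache would do the same job:

```php
<?php
// Sketch: serve GetItems() responses from a local cache so site visits
// don't hit the Amazon API directly. File-based cache for illustration only.

function cached_get_items(array $asins, callable $getItems, string $cache_dir, int $ttl = 3600): array
{
    sort($asins); // make the cache key independent of ASIN order
    $file = rtrim($cache_dir, '/') . '/paapi_' . md5(implode(',', $asins)) . '.json';

    if (is_file($file) && time() - filemtime($file) < $ttl) {
        return json_decode(file_get_contents($file), true); // cache hit: no API call
    }

    $items = $getItems($asins); // cache miss: one real (rate-limited) call
    file_put_contents($file, json_encode($items));
    return $items;
}
```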

Peter Rakmanyi
  • Thanks for the documentation. I only have one cron job that triggers every few hours and executes a single instance of this script; that's the only time I use it. The docs clearly say the limit is 1 request per second, and my script's microtime check makes damn sure I have waited long enough since the start of the previous request. But in reality, the error persists until I increase the interval to 2 seconds. – Anonymous Jan 30 '20 at 17:14
  • @Anonymous Maybe the problem is network latency. It takes a varying amount of time for your request to get there and then get evaluated on the other side; this is the time the server thinks it is from, which is slightly later than when you generated and sent it. If the latency doesn't change and the server code is a bit more lenient, then it should work. But if one request is more delayed, it and the next one can arrive closer than one second to each other and trigger the error. – Peter Rakmanyi Jan 31 '20 at 16:21
  • @Anonymous You should ping the server to check the latency, increase the delay a bit (1050 ms instead of 1000 ms) and see how often the error happens. Find a value that doesn't trigger the error too often and use that. Add a try/catch block that waits and calls itself with the same position in the loop, so it only waits longer when the error actually happens. – Peter Rakmanyi Jan 31 '20 at 16:30
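The retry idea from that comment could be sketched as follows. The exception type a PAAPI client throws for a 429 depends on the SDK, so a plain `Exception` catch and the function name are placeholders here:

```php
<?php
// Sketch: on a throttling error, wait a bit longer and retry the same chunk,
// instead of padding every single request with a larger fixed delay.

function get_items_with_retry(array $chunk, callable $getItems, int $max_retries = 3, int $base_delay_us = 1050000): array
{
    for ($attempt = 0; ; $attempt++) {
        try {
            return $getItems($chunk);
        } catch (Exception $e) { // e.g. the SDK's 429 / TooManyRequests exception
            if ($attempt >= $max_retries) {
                throw $e; // give up: the error is probably not just throttling
            }
            // back off a little more on each failure: 1.05 s, 2.10 s, 3.15 s, ...
            usleep($base_delay_us * ($attempt + 1));
        }
    }
}
```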

I think I found the solution: PAAPI 5 seems especially sensitive to the very first two requests. A good pause should be maintained there, after which the request rate can be increased up to the nominal 1 per second.

So the loop should go something like this: 1st request, pause until 2 seconds have passed since the start of the request; 2nd request, pause until 1 second has passed since the start of the request; 3rd request, pause until 1 second has passed since the start of the request; 4th request, ... etc.

I suspect it might also work to hold one full second AFTER the end of the first request instead of 2 seconds total, before switching to the 1-second wait counted from the start of each request, but I have yet to try this to verify. Obviously, if the first request itself exceeds 1 second and an error is thrown, I'll know this is the case.

Edit: it worked for an hour and then stopped working again, so I am going to switch to waiting 1 second after every request just to be sure. This is really frustrating because there is no way to tell exactly what time their server records: it is neither the starting time of the request nor the end time. It seems to fluctuate arbitrarily between the two, and whenever the 1-second rule is violated (which happens at random) it starts throwing countless 429 errors. So I'm simply going to wait 1 second after each request and hope the requests themselves are slightly faster than 1 second, which should give me some small advantage in reliability over waiting a full 2 seconds every time, as in my originally posted example.
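A sketch of that final approach: a fixed pause that only starts after each response arrives, so the gap the server observes can never shrink below the pause, regardless of how it timestamps requests. The function name and the callable parameter are mine for illustration; `GetItems` from the question would be passed in:

```php
<?php
// Sketch: pause AFTER each request completes instead of measuring from its start.
// The server-observed gap between requests is then at least $pause_us.

function fetch_with_post_request_pause(array $asins, callable $getItems, int $pause_us = 1000000): array
{
    $out = array();
    foreach (array_chunk($asins, 10) as $chunk) {
        $out = array_merge($out, $getItems($chunk));
        usleep($pause_us); // 1 s by default, counted from the end of the response
    }
    return $out;
}
```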

Anonymous
  • Did you find out how many items are allowed in a single request with the GetItems operation? Is it 10? – mlg Jan 20 '23 at 13:18