
I bundle 115 fetch requests into a Promise.all in order to load mandatory resources. The problem is that they all fire at once and, depending on the server I test on, either freeze the script entirely or come back with 500 errors.

I implemented a time delay between these requests so that they don't all fire at once; the lowest offset I can get away with is 50 ms, which adds up to 5.75 seconds of loading time across 115 requests.

  • should I create a new API endpoint which bundles these requests? (I'd rather not, to be honest; being able to cache each request separately after it loads is a huge bonus)
  • is there a way to use one connection for all these requests so that it doesn't look like many individual requests to the server?
  • is there a way to make the server wait instead of handling all these requests at once?

I am also curious to know how the browser handles this problem, since a website can easily want to load more than 100 different resources at a time. I'd love for the browser to handle my many requests in a 'waterfall' manner similar to what's shown in Chrome's developer tools.

Note that I do NOT want to wait for each fetch request to completely finish before starting another; I just want the requests not to be sent all at the same time.

I must mention that I am using a PHP API on an Apache server with a MySQL database.

let offset = 100
let delay = 0
let requests = ['apiRequest0','apiRequest1','apiRequest2','apiRequest3'...]
let data = {}

await (async() => {
    // Stagger each request's start by `delay` ms so they don't all fire at once
    let promises = []
    requests.forEach(function(item){
        promises.push(
            (async() => {
                await new Promise(function(resolve) { setTimeout(resolve, delay); });
                data[item] = await fetch(`/api/${item}`).then(response => response.json())
            })()
        )
        delay += offset
    })
    return await Promise.all(promises)
})();
  • I tried using keep-alive in the fetch requests - no change
  • I tried raising the execution time and memory limit to higher values - no change
  • I tried adding a delay, which works, but I'd love a faster and more reliable solution instead of guessing what the server can cope with (something like the concurrency-limited sketch below)
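
For reference, here is a minimal, dependency-free sketch of the kind of concurrency limit I have in mind (the helper name runWithConcurrency and the limit of 6 are placeholders I made up, not something tested against my actual API):

async function runWithConcurrency(items, limit, worker) {
    const results = {}
    let index = 0
    // Each "lane" grabs the next item as soon as its previous one settles,
    // so at most `limit` requests are in flight at any moment
    async function lane() {
        while (index < items.length) {
            const item = items[index++]
            results[item] = await worker(item)
        }
    }
    await Promise.all(Array.from({ length: limit }, () => lane()))
    return results
}

// Usage: same requests as above, but never more than 6 at once
data = await runWithConcurrency(requests, 6, item =>
    fetch(`/api/${item}`).then(response => response.json())
)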
Tim
  • IMHO 115 requests is way TOO much – Arnau Nov 08 '22 at 11:19
  • Your question is the classic example of why I've always said `Promise.all` should come with a warning sticker.. :) A better option would be a promise map with concurrency; a quick look on npm -> https://www.npmjs.com/package/p-map (a sketch of this appears after these comments) – Keith Nov 08 '22 at 11:21
  • I would go for the approach of building a semantic API which wraps multiple requests into one. E.g. if the first 20 calls are about the same "domain", then create an API for this "domain" and call it only once; it returns the same as the 20 requests. As you said, the SERVER is freezing. It's more about the management on the server side than the client side. – Burak Ayyildiz Nov 08 '22 at 11:22
  • "_I tried adding a delay, which works, but I'd love to have a faster and more reliable solution instead of guessing what the server can cope with_" if all your api calls target the same server, then I think it really is a matter of what the server can cope with. – GrafiCode Nov 08 '22 at 11:26
  • `since a website can easily want to load more than 100 different resources at a time`...actually no, that sounds horrendously inefficient. Look at a redesign of the API and the client side. No way you should be trying to fetch that amount of stuff in separate requests. How did you end up with a scenario like that? Either you're cramming way too much stuff into one page, more than a user would need or be able to cope with, and/or your API is only allowing piecemeal downloads of info that it ought to offer a bulk-load method for. – ADyson Nov 08 '22 at 12:25
  • A grocery store can handle a thousand people at once, but they will be bumping into each other and take 'forever' to finish shopping. – Rick James Nov 08 '22 at 21:36
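
Picking up Keith's p-map suggestion from the comments: assuming the package's documented API, the staggering logic above collapses to something like this (the concurrency value of 5 is a guess, not a tuned number):

import pMap from 'p-map'

// Resolves to an array of JSON payloads, never more than 5 requests in flight
const results = await pMap(
    requests,
    item => fetch(`/api/${item}`).then(response => response.json()),
    { concurrency: 5 }
)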

1 Answer


Following the consensus, I merged my 115 requests down to 5 by creating new API endpoints, which solved my problem.

Thanks a lot!
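
For context, the client side now looks roughly like this (the bundle endpoint names below are placeholders, not my real routes, and each endpoint is assumed to return an object keyed by resource name):

// Five bundled endpoints instead of 115 individual requests
const bundles = ['bundleA', 'bundleB', 'bundleC', 'bundleD', 'bundleE']

const results = await Promise.all(
    bundles.map(bundle =>
        fetch(`/api/${bundle}`).then(response => response.json())
    )
)

// Merge the five response objects back into one lookup table
const data = Object.assign({}, ...results)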

Tim
  • I personally think the consensus was wrong!! Reasons: 1. You're altering the logic of your code for no other reason than that you're not handling concurrency correctly. 2. Batch requests are meant to solve problems of latency, but there are far better ways to do it without altering the logic of your code, e.g. I use sockets that automatically sequence requests with debounce, similar to HTTP/2/SPDY. 3. By batching your requests you have now taken away the ability to cache individual requests, and that has a massive effect on performance. – Keith Nov 09 '22 at 11:48
  • No. 4: By batching you have also increased the memory size of the payload. Yes, this can be mitigated by streaming, but some things don't stream that well and add extra complexity anyway. E.g. say you were downloading 115 JSON files at 100 KB each; batched into 5, that's a payload of 2.3 MB each, instead of 100 KB per request. IOW: lots of small payloads are not a bad thing, and indeed one of the newest frameworks, Qwik, which scores very high on Google PageSpeed, is based on that concept. – Keith Nov 09 '22 at 12:09