
Consider a function that makes an HTTP request until it succeeds:

function fetch_retry(url) {
    return fetch(url).catch(function(error) {
        return fetch_retry(url);
    });
}

As the number of failures increases, does the memory consumption increase?

I guess the call stack height is O(1), but I don't know whether the closure context (growing over time) is retained.


Edit:

Case 2

function fetch_retry(url) {
    return fetch(url).catch(function(error) {
      //return fetch_retry(url)+1;
        return fetch_retry(url).then(res=>{res.body+='a'});
    });
}

(For the commented-out line, assume that fetch resolves to a number.) The constant 1 should stay in memory because it is used later, after the recursive call resolves.

Case 3

function fetch_retry(url, odd_even) {
    return fetch(url).catch(function(error) {
      //return fetch_retry(url, 1-odd_even)+odd_even;
        return fetch_retry(url, 1-odd_even).then(res=>{res.body+=String(odd_even)});
    });
}

fetch_retry('http://..', 0)

By alternating 0 and 1, the engine cannot reuse the same operand for every level; each level needs its own.
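To make the retention concrete, here is a minimal stand-in with the same shape (chain is just an illustrative name, not the real fetch code): each level attaches its own .then handler, and that handler, together with the operand it closes over, has to stay reachable until the innermost promise settles.

function chain(n) {
    if (n == 0) return Promise.resolve(0);
    // each pending level keeps its own handler and its own copy of n alive
    return chain(n - 1).then(value => value + n);
}

// depth kept small so the synchronous recursion fits in the default call stack
chain(1000).then(sum => console.log(sum)); // 1000 handlers are pending at once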


Edit:

No one explained Cases 2 and 3, so I ran the experiment myself. Surprisingly, the memory increases in all scenarios.

function f(n) {
    return new Promise(function (resolve, reject) {
        gc();
        if(n==10000) return resolve(0);
        // else return f(n+1).then(value=>value+n);
        // else return f(n+1).then(value=>value);
        else return resolve(f(n+1));
    });
}

f(0).then(res => {
    console.log('result: ', res);
    const used = process.memoryUsage().heapUsed / 1024 / 1024;
    console.log(`memory: ${Math.round(used * 100) / 100} MB`);
}).finally(()=>console.log('finished'));

And ran it with

node --stack_size=999999 --expose_gc test.js

Note that I ran gc() every time to prevent delayed GC.

  • With n=1000, memory: 4.34 MB
  • With n=10000, memory: 8.95 MB

Roughly 4.6 MB for the 9,000 extra calls, i.e. about 540 bytes per call.

One possibility is that the resolve function in each stack frame is retained. To remove this,

function f(n) {
    return new Promise(function (resolve, reject) {
        gc();
        if(n==10000) resolve(0);
        // else return f(n+1).then(value=>value+n);
        else return f(n+1).then(value=>value);
        // else return f(n+1);

        const used = process.memoryUsage().heapUsed / 1024 / 1024;
        console.log(`memory: ${Math.round(used * 100) / 100} MB`);
    });
}
f(0);
  • With n=1000, memory: 4.12 MB
  • With n=10000, memory: 7.07 MB

So the stack is flat, but the memory is not as clean as we might think?

mq7
  • Your edit is totally useless as it won't work that way. – Jonas Wilms Jun 13 '18 at 13:23
  • You can map the response with int((res)=>res.body) to make it working code, but that detail is not important here. – mq7 Jun 13 '18 at 13:27
  • FYI, blindly retrying immediately with no delay, no examination of the error to assess whether a retry is appropriate, no max retry count, and no retry back-off is usually foolish. It can lead to avalanche failures where one repeating error causes everything to spin out of control, hammering all your resources so badly that nothing else can work any more, when there was just one initial error somewhere. – jfriend00 Jun 13 '18 at 13:57
  • @jfriend00 It's good to add a delay or examine the error, or anything, but I wrote this code just to demonstrate an example of a recursive call with a promise. – mq7 Jun 13 '18 at 15:41
  • Closely related: [Building a promise chain recursively in javascript - memory considerations](https://stackoverflow.com/q/29925948/1048572) – Bergi Jun 13 '18 at 18:04

2 Answers


As the number of failures increases, does the memory consumption increase?

It does not. Modern implementations of promises keep no reference to the promises they originated from. I can verify that this is the case in Firefox, Chrome, Safari and Edge.

'Old' implementations had this problem (think Q). Modern browsers and fast libraries like bluebird 'detach' the reference after resolving.

That said, there is no reason to implement this recursively nowadays; you can just do:

async function fetch_retry(url) {
  while(true) { // or limit retries somehow
    try {
      return await fetch(url); // success: stop retrying and return the response
    } catch { }
  }
}

Note that retrying forever is a bad idea. I warmly recommend being friendlier to the backend: use exponential backoff between retries and limit the number of retries.
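For example, a bounded retry with exponential backoff could look roughly like this sketch (maxRetries and baseDelayMs are illustrative parameters I made up, not part of fetch):

async function fetch_retry(url, maxRetries = 5, baseDelayMs = 100) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fetch(url); // success: return the response
    } catch (error) {
      if (attempt === maxRetries - 1) throw error; // out of retries, give up
      // back off: 100ms, 200ms, 400ms, ... before the next attempt
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}

This keeps the simple loop structure of the version above while capping how hard you hit the backend.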

Caveat

This will actually be hard to measure on your end, since when the debugger is attached in V8 (Chrome's JavaScript engine), async stack traces are collected, and in your recursive example memory will grow.

You can turn off async stack traces from the Chrome devtools (there is a checkbox).

This will leak a little bit (the stack frames) in development but not in production.

Other caveats

If fetch itself leaks memory on errors (which it has done at times for certain types of requests), then the code will obviously leak. As far as I know, all evergreen browsers have fixed the known leaks, but there might be a new one in the future.

Benjamin Gruenbaum
return fetch(url).catch(function(error) {
    return fetch_retry(url);
    // Function exits, losing the reference to the closure
    // Closure gets garbage collected soon
});

When the callback (catch) gets called, it loses the reference to the caught function, therefore the function's closure will get garbage collected. Only the outer promise and the one that is currently pending stay there.

As the number of failures increases, does the memory consumption increase?

No.

Jonas Wilms