
I've written a piece of code which takes two arguments: the first is a URL and the second is an integer for how many times the URL should be downloaded (I know there is no point downloading the same URL again and again, but this code is just a sample; in the actual code the URL is picked randomly from a database table). As of now the code is written as a recursive function. Here is what my current code looks like:

const request = require("request");

function downloadUrl(url, numTimes) {
    if (numTimes > 0) {
        console.log(url, numTimes);
        request.get(url, function (err, resp, buffer) {
            if (err) {
                return err;
            }

            console.log(`MimeType: ${resp.headers['content-type']}, Size: ${buffer.length}, numTimes: ${numTimes}`);
            downloadUrl(url, --numTimes);
        });
    }
}

function main() {
    downloadUrl('http://somerandomurl', 5); // the URL here might get picked randomly from an array or a table
}

main();

What I want to know is: can this recursive code be written as iterative code using a while or a for loop? I've tried writing the following code:

function downloadUrl(url, numTimes) {
    for (let i = 0; i < numTimes; i++) {
        request.get(url, function (err, resp, buffer) {
            if (err) {
                return err;
            }

            console.log(`MimeType: ${resp.headers['content-type']}, Size: ${buffer.length}, numTimes: ${numTimes}`);
        });
    }
}

But this code seems to execute in parallel, which it obviously will, because in Node.js asynchronous code doesn't wait for a statement to complete before proceeding to the next one, unlike a language such as Java.

My question is: is there a way I can write iterative code that behaves exactly like my recursive code? My recursive code executes sequentially, where the numTimes variable is decremented by one and gets printed sequentially from 5 to 1.

I've tried my best to keep my question clear, but in case something is not clear or confusing, please feel free to ask.

Silvanas
due to the asynchronous nature of http requests in node/js a callback is necessary, thus recursion in general cannot be avoided – Nikos M. Aug 26 '19 at 18:49
  • @NikosM.: I too had this gut feeling but wasn't sure about it as I am new to Node.js. Your response helped me in confirming that I was probably right. Many thanks. – Silvanas Aug 26 '19 at 18:54

3 Answers


I guess that you want your HTTP request to end before making another one; correct me if I'm wrong, but you can use await in your method.

const request = require('request');

async function downloadUrl(url, numTimes) {
    for (let i = 0; i < numTimes; i++) {
        const objToResolve = await doDownload(url);
        if(objToResolve.err){
            console.log(`Error: ${objToResolve.err}, try: ${i}`);   
        }else{
            console.log(`Size: ${objToResolve.buffer.length}, try: ${i}`);
        }
    }
}

// wrap a request in a promise
function doDownload(url) {
    return new Promise((resolve, reject) => {
        request(url, (err, resp, buffer) => {
            if (err) {
                reject({err});
            }else{
                resolve({err, resp, buffer});
            }
        });
    });    
}

// now to program the "usual" way
// all you need to do is use async functions and await
// for functions returning promises
function main() {
    console.log('main called');
    downloadUrl('http://www.macoratti.net/11/05/c_aspn3c.htm', 5);
}

main();

EDIT: By adding a timeout you can handle your requests better:

const request = require('request');

async function downloadUrl(url, numTimes) {
    for (let i = 0; i < numTimes; i++) {
        try {
            const objToResolve = await doDownload(url);
            if (objToResolve.err) {
                console.log(`Error: ${objToResolve.err}, try: ${i}`);
            } else {
                console.log(`Size: ${objToResolve.buffer.length}, try: ${i}`);
            }
        } catch (timeout) {
            console.log(`Error: ${timeout}, try: ${i}`);
        }
    }
}

// wrap a request in a promise, racing it against a timeout promise
function doDownload(url) {
    const timeout = new Promise((resolve, reject) => {
        setTimeout(() => {
            reject(new Error('timeout'));
        }, 5000); // must be longer than the request's own timeout below
    });
    const requestPromise = new Promise((resolve, reject) => {
        request({uri: url, timeout: 3000}, (err, resp, buffer) => {
            if (err) {
                reject({err});
            } else {
                resolve({err, resp, buffer});
            }
        });
    });
    return Promise.race([timeout, requestPromise]);
}

// now to program the "usual" way
// all you need to do is use async functions and await
// for functions returning promises
function main() {
    console.log('main called');
    downloadUrl('http://www.macoratti.net/11/05/c_aspn3c.htm', 5);
}

// run your async function
main();

Reference: Synchronous Requests in Node.js

Jose Leles
  • I have experienced some issues, I don't know why, but sometimes the loop just stops – Jose Leles Aug 26 '19 at 19:20
    Does it actually stop, or is one of the requests hanging? Try adding a `timeout` to your `request`. – mpen Aug 26 '19 at 19:22
  • that's right, I did catch this exception considering timeout, I'll edit my answer – Jose Leles Aug 26 '19 at 19:36
    @JoseLeles: Perfect, this is exactly what I was looking for. Thank you very much. – Silvanas Aug 27 '19 at 05:10
    using promises and the async/await feature of ES6 seems to provide an iterative version, but in the end this is mostly syntactic sugar; it does not make the requests synchronous – Nikos M. Aug 27 '19 at 10:31
  • That's right: when I'm calling downloadUrl() the code doesn't wait, because downloadUrl is an async function. But in Node can I write a totally synchronous function? – Jose Leles Aug 27 '19 at 11:47

Every recursive piece of code can be transformed into a non-recursive one :) So what does the recursive magic do? It just abuses the call stack as a store for partial results. In fact you can build your own stack, and JavaScript makes this very easy: you can use an array to store your partial results.

Using shift() to remove the first item of an array
Using pop() to remove the last element of an array
Using push() to add to the end of an array
Using unshift() to add to the beginning of an array
Using splice() to add elements within an array

So with those it's very simple to build your own "url" stack; push and pop will be your best friends. Instead of recursing, push the URL onto the array as long as you cannot download it; once you can download it, pop the URL from the array.

The length of the array gives you the stack counter at any time, and the job is done once your array has length 0 :) In simple words: if you recognize that the "mess" to clean up gets deeper, push it onto the array; if you can remove some "mess", do that tiny job and pop it from the array. That's nothing other than what the recursion does, but without bothering the OS or interpreter. In the good old days call stacks were very limited, so building your own stack breaks those limits. It can also be much more memory-efficient, because you only store what's really needed.
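The idea can be sketched in a few lines. This is a toy illustration: `downloadAll` and its `download` callback are made-up names, and the callback stands in for a real HTTP request so the sketch stays self-contained.

```javascript
// Build our own "call stack" out of a plain array: push pending work
// onto it, pop work off as we are able to do it. No recursion involved.
function downloadAll(urls, download) {
    const stack = urls.slice().reverse(); // copy, reversed so pop() yields original order
    const done = [];
    while (stack.length > 0) {     // length 0 means the job is finished
        const url = stack.pop();   // take the next piece of "mess"
        download(url);             // do the tiny job...
        done.push(url);            // ...and record that it is cleaned up
    }
    return done;
}

// usage: processes the URLs in their original order, iteratively
const order = downloadAll(['a', 'b', 'c'], () => {});
```

In a real retry loop you would push a URL back onto the stack when the download fails, exactly as the recursion would re-enter itself.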

Thomas Ludewig

I get what you're asking for - I think you're looking for a generator. Basically you just want a controlled loop where you don't iterate to the next item until the first has totally completed its business.

I mean, behind the scenes it basically is still just a recursive-ish function; it just wraps it up to act like a sequential, controlled loop.
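That machinery can be sketched like this (a hedged illustration: `run` and `fakeDownload` are made-up names, and the fake download stands in for a real HTTP request so the example is self-contained):

```javascript
// A tiny driver that pulls promises out of a generator one at a time:
// each yielded "download" must finish before the generator resumes, so
// the loop body runs strictly sequentially. This is essentially what
// async/await desugars to.
function run(genFn) {
    const gen = genFn();
    return new Promise((resolve) => {
        function step(prev) {
            const { value, done } = gen.next(prev);
            if (done) return resolve(prev);
            value.then(step); // wait for this download before the next
        }
        step();
    });
}

const log = [];

// stand-in for request.get: resolves asynchronously on the next tick
function fakeDownload(url, n) {
    return new Promise((resolve) =>
        setImmediate(() => { log.push(`${url} ${n}`); resolve(n); }));
}

const finished = run(function* () {
    for (let i = 5; i >= 1; i--) {
        yield fakeDownload('http://somerandomurl', i);
    }
});

finished.then(() => console.log(log.join(', ')));
```

Even though every download is asynchronous, the log comes out in order, 5 down to 1, just like the recursive version in the question.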

Kyle