
I'm writing a script to scan the /config route on some ports on a host.

I first wrote the script in node.js and am now porting it to bash to reduce the number of dependencies.

Why is the bash script more than 300 times slower when scanning localhost? What am I missing?

I guess there are some optimizations built into node-fetch. How can I achieve the same in bash?

  • node.js: 10 Ports -> 79ms
  • bash(0.0.0.0): 10 Ports -> 2149ms
  • bash(localhost): 10 Ports -> 25156ms

I found out that in bash, using 0.0.0.0 instead of localhost makes it only 27 times slower, but still... (In node.js, using 0.0.0.0 does not make a significant difference.)

node.js (IIFE omitted for readability)

import fetch from 'node-fetch';
for (let port = portFrom; port <= portTo; port++) {
  try {
    const res = await fetch("http://localhost:" + port + '/config');
    const json = await res.json();
    console.log(json);
  } catch {/*no-op*/}
}
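
For completeness, a self-contained version of the snippet above might look roughly like this (the port range values below are placeholders I picked for the example, not the real ones):

import fetch from 'node-fetch';

// placeholder port range for the example; the real values come from elsewhere
const portFrom = 3000;
const portTo = 3009;

(async () => {
  for (let port = portFrom; port <= portTo; port++) {
    try {
      const res = await fetch(`http://localhost:${port}/config`);
      const json = await res.json();
      console.log(json);
    } catch { /* no-op */ }
  }
})();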

bash

for ((port = port_from; port <= port_to; port++)); do
  json="$(curl -s "http://localhost:$port/config")"
  echo "$json"
done
BoldSheep
  • You are missing, for example, that in the `bash` variant you are creating and terminating a new OS process every time. That's quite expensive at the OS level. With `node` you only start a process once and then do everything else within that process ... – derpirscher Nov 11 '22 at 13:46
  • Not entirely sure about the last measurement. This probably has some other reason. Ten calls of `curl` shouldn't take 25 seconds ... How often did you repeat those tests? – derpirscher Nov 11 '22 at 13:54
  • And just as a side note: using `0.0.0.0` as a *target* for your request doesn't make much sense. It might be resolved to localhost on Linux systems. But technically `0.0.0.0` is only supported as a *source address* when binding a socket to an interface of the machine ... – derpirscher Nov 11 '22 at 14:11
  • Ah okay, so what is `node` then actually doing in the background? I thought it would just call `curl` or `wget` or something similar? Is it also possible to achieve this with bash or with something else without many requirements in a docker container environment? I just repeated the last measurements and it took 4m8,748s for 100 calls. Note however that the call will fail/timeout most of the time. Do you have any idea why it takes so much longer when using `localhost`? – BoldSheep Nov 11 '22 at 16:34
  • For doing multiple requests with curl see for instance https://stackoverflow.com/questions/3110444/how-can-i-run-multiple-curl-requests-processed-sequentially Any particular reason you are insisting on using a technically invalid address? If you want to connect to `localhost`, use either `localhost` or the defined IP address `127.0.0.1` – derpirscher Nov 11 '22 at 17:08
  • If most of your requests run into a timeout, it's no wonder that it takes quite some time. Hard to say why without knowing your environment – derpirscher Nov 11 '22 at 17:10
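
Following up on the comments about per-request process overhead and doing multiple requests from a single curl invocation, a minimal bash sketch might look like this (the port range and timeout values are assumptions for the example, not taken from the original script):

#!/usr/bin/env bash
# assumed example range; substitute the real values
port_from=3000
port_to=3009

# curl expands the [from-to] glob itself, so a single process handles the
# whole range instead of one process per port.
# --connect-timeout caps how long a closed or filtered port can stall the scan.
curl -s --connect-timeout 1 --max-time 2 \
  "http://localhost:[${port_from}-${port_to}]/config"

Newer curl versions (7.66.0 and later) can additionally run the globbed transfers concurrently with `-Z`/`--parallel`, which should bring the bash timing much closer to the node.js one.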

0 Answers