15

I'm using Node.js to query some public APIs via HTTP requests, and for this I'm using the request module. I'm measuring the response time within my application and see that it returns the results from API queries about 2-3 times slower than "direct" requests via curl or in the browser. I also noticed that connections to HTTPS-enabled services usually take longer than plain HTTP ones, but that may be a coincidence.

I tried to optimize my request options, but to no avail. For example, I query

https://www.linkedin.com/countserv/count/share?url=http%3A%2F%2Fwww.google.com%2F&lang=en_US

I'm using request.defaults to set the overall defaults for all requests:

var request = require('request');

var baseRequest = request.defaults({
    pool: {maxSockets: Infinity},
    jar: true,
    json: true,
    timeout: 5000,
    gzip: true,
    headers: {
        'Content-Type': 'application/json'
    }
});

The actual requests are done via

...
var start = new Date().getTime();

var options = {
    url: 'https://www.linkedin.com/countserv/count/share?url=http%3A%2F%2Fwww.google.com%2F&lang=en_US',
    method: 'GET'
};

baseRequest(options, function(error, response, body) {

    if (error) {
        console.log(error);
    } else {
        console.log((new Date().getTime()-start) + ": " + response.statusCode);
    }

});

Does anybody see optimization potential? Am I doing something completely wrong? Thanks in advance for any advice!

Tobi
  • Are you doing the request from your node.js code and the curl request from the same machine? – Tristan Foureur Apr 13 '15 at 09:19
  • @TristanFoureur Yes, I do. I think this behavior is possibly caused by some request options, but I can't seem to find out which options to choose to get the optimal performance. – Tobi Apr 13 '15 at 09:22
  • I just tried with your code and without changing anything. Got a 545ms avg response time with your code and a 550ms avg response time with multiple curl calls. – Tristan Foureur Apr 13 '15 at 09:25
  • To give a little more detail: I'm running multiple worker processes for HTTP requests to public APIs over a RabbitMQ-backed distributed RPC system. That means there can be hundreds of concurrent "open" requests per node process. I see the delays under higher workloads; simple (low numbers of) requests work fine... So I guess some tweaks to the request options are probably necessary... – Tobi Apr 13 '15 at 10:01
  • Then you might want to have a look at [hyperquest](https://github.com/substack/hyperquest); it might be helpful for you. Also, personally, when I have to do a lot of requests like this, I use some job queues with X workers to make sure that I stay below X concurrent requests. – Tristan Foureur Apr 13 '15 at 10:40
  • That looks promising upon first sight... Definitely will check it out. Thanks! – Tobi Apr 13 '15 at 10:49

2 Answers

12

There are several potential issues you'll need to address given what I understand from your architecture. In no particular order they are:

  • Using request will always be slower than using http directly since, as the wise man once said, "abstraction costs". ;) In fact, to squeeze out every possible ounce of performance, I'd handle all HTTP requests using node's net module directly. For HTTPS, it's not worth rewriting the https module. And for the record, HTTPS will always be slower than HTTP by definition, due both to the need to handshake cryptographic keys and to the encrypt/decrypt work on the payload.
  • If your requirements include retrieving more than one resource from any single server, ensure that those requests are made in order with HTTP keep-alive set, so you can benefit from the already open socket (see the sketch after this list). The time it takes to handshake a new TCP socket is huge compared to making a request on an already open socket.
  • ensure that HTTP connection pooling is disabled (see Nodejs Max Socket Pooling Settings)
  • ensure that your operating system and shell are not limiting the number of available sockets. See How many socket connections possible? for hints.
  • if you're using Linux, check Increasing the maximum number of tcp/ip connections in linux, and I'd also strongly recommend fine-tuning the kernel socket buffers.
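
As a rough illustration of the first two points (a sketch of mine, not code from the original answer): with node's built-in https module you can create a keepAlive agent and reuse it for every request to the same host, so only the first request pays for the TCP and TLS handshakes. The maxSockets value here is an assumption to tune for your workload.

var https = require('https');

// Keep-alive agent: sockets stay open between requests, so only the first
// request to the host pays for the TCP and TLS handshakes.
var keepAliveAgent = new https.Agent({
    keepAlive: true,
    maxSockets: 50 // assumption: tune to your concurrency needs
});

var start = Date.now();

https.get({
    hostname: 'www.linkedin.com',
    path: '/countserv/count/share?url=http%3A%2F%2Fwww.google.com%2F&lang=en_US',
    agent: keepAliveAgent
}, function(res) {
    res.resume(); // drain the body; parse it instead if you need the payload
    res.on('end', function() {
        console.log((Date.now() - start) + ': ' + res.statusCode);
    });
}).on('error', console.log);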

I'll add more suggestions as they occur to me.

Update

More on the topic of multiple requests to the same endpoint:

If you need to retrieve a number of resources from the same endpoint, it would be useful to segment your requests to specific workers that maintain open connections to that endpoint. In that way, you can be assured that you can get the requested resource as quickly as possible without the overhead of the initial TCP handshake.
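
A minimal sketch of that idea (mine; agentFor is an illustrative helper, not an existing API): keep one keepAlive agent per target host and pass it to every request for that host, so each worker holds warm sockets to "its" endpoints.

var https = require('https');
var url = require('url');
var request = require('request');

var agents = {}; // hostname -> https.Agent holding warm sockets

// Illustrative helper: return (and lazily create) the shared agent for a host.
function agentFor(targetUrl) {
    var host = url.parse(targetUrl).hostname;
    if (!agents[host]) {
        agents[host] = new https.Agent({ keepAlive: true, maxSockets: 20 });
    }
    return agents[host];
}

request({
    url: 'https://www.linkedin.com/countserv/count/share?url=http%3A%2F%2Fwww.google.com%2F&lang=en_US',
    agent: agentFor('https://www.linkedin.com/'),
    json: true,
    gzip: true
}, function(error, response, body) {
    if (!error) console.log(response.statusCode);
});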

The TCP handshake is a three-stage process:

  • Step one: the client sends a SYN packet to the remote server.
  • Step two: the remote server replies to the client with a SYN+ACK.
  • Step three: the client replies to the remote server with an ACK.

Depending on the client's latency to the remote server, this can add up to (as William Proxmire once said) "real money", or in this case, delay.

From my desktop, the current latency (round-trip time measured by ping) for a 2K octet packet to www.google.com is anywhere between 37 and 227ms.

So assuming that we can rely on a round-trip mean of 90ms (over a perfect connection), the time for the initial TCP handshake would be around 135ms, or SYN (45ms) + SYN+ACK (45ms) + ACK (45ms), and that is more than a tenth of a second just to establish the initial connection.

If the connection requires retransmission, it could take much longer.

And this is assuming you retrieve a single resource over a new TCP connection.

To ameliorate this, I'd have your workers keep a pool of open connections to "known" destinations, which they would then advertise back to the supervisor process so it could direct requests to the least loaded worker with a "live" connection to the target server.
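
To make that concrete, here is a rough sketch (assumptions of mine: the agents map from the previous sketch, and reportToSupervisor as a placeholder for whatever transport you use, e.g. the RabbitMQ channel mentioned in the comments) of how a worker could advertise its warm connections:

// Count idle keep-alive sockets per host; freeSockets is a standard property
// of a keepAlive-enabled http(s).Agent.
function connectionSnapshot(agents) {
    var snapshot = {};
    Object.keys(agents).forEach(function(host) {
        var free = agents[host].freeSockets || {};
        snapshot[host] = Object.keys(free).reduce(function(n, key) {
            return n + free[key].length;
        }, 0);
    });
    return snapshot;
}

// reportToSupervisor is a placeholder for your own RPC/queue publish call;
// agents is the hostname -> Agent map from the previous sketch.
setInterval(function() {
    reportToSupervisor({
        worker: process.pid,
        freeSockets: connectionSnapshot(agents)
    });
}, 5000);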

Rob Raisch
  • Wow, thanks a lot for your extensive answer. I'll try to test this over the weekend. – Tobi Apr 16 '15 at 20:34
  • Thanks again for your answer. I think the `Keep-Alive` header will probably have the most impact, aside from disabling the HTTP connection pooling from Node. Unfortunately, I can't really partition the requests by requested endpoint, because I want to evenly distribute the load and therefore do a round-robin on my RPC workers via RabbitMQ. But good idea anyway! – Tobi Apr 20 '15 at 06:36
5

Actually, I have some new elements good enough to open a real answer. Having a look at the way request uses the HTTP agent, can you please try the following:

var baseRequest = request.defaults({
    pool: false,   // don't use request's socket pool
    agent: false,  // use a new one-off agent per request (no shared pooling)
    jar: true,
    json: true,
    timeout: 5000,
    gzip: true,
    headers: {
        'Content-Type': 'application/json'
    }
});

This will disable connection pooling and should make it a lot faster.
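
If you want to verify the effect in your particular setup, a quick comparison sketch (mine, reusing the endpoint from the question) is to time the same batch of requests with and without pooling:

var request = require('request');

var URL = 'https://www.linkedin.com/countserv/count/share?url=http%3A%2F%2Fwww.google.com%2F&lang=en_US';

// Time n parallel requests issued through the given request.defaults instance.
function timeBatch(label, baseRequest, n, done) {
    var start = Date.now();
    var pending = n;
    for (var i = 0; i < n; i++) {
        baseRequest({ url: URL, method: 'GET' }, function() {
            if (--pending === 0) {
                console.log(label + ': ' + (Date.now() - start) + 'ms for ' + n + ' requests');
                if (done) done();
            }
        });
    }
}

var pooled = request.defaults({ json: true, gzip: true, timeout: 5000 });
var unpooled = request.defaults({ pool: false, agent: false, json: true, gzip: true, timeout: 5000 });

timeBatch('pooled', pooled, 20, function() {
    timeBatch('unpooled', unpooled, 20);
});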

Tristan Foureur
  • It will, you're right, and it's a trade-off against overall speed for the high number of requests he's making; this is why I'm only asking him to try it and see how it behaves in his particular use case. – Tristan Foureur Apr 16 '15 at 03:11