I understood that both of them disable Nagle's algorithm.

When should/shouldn't I use each one of them?

4 Answers

First of all, not both of them disable Nagle's algorithm.

Nagle's algorithm is for reducing the number of small packets on the wire. The algorithm is: if the data is smaller than a limit (usually the MSS), wait until the ACK for the previously sent packets is received, and in the meantime accumulate data from the user. Then send the accumulated data.

if data > MSS
    send(data)
else
    wait for the ACK of the previously sent data, accumulating new data in the send buffer
    once the ACK arrives, send(data)

This will help in applications like telnet. However, waiting for the ACK may increase latency when sending streaming data. Additionally, if the receiver implements the 'delayed ACK policy', it will cause a temporary deadlock situation. In such cases, disabling Nagle's algorithm is a better option.

So TCP_NODELAY is used for disabling Nagle's algorithm.
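
For illustration only, a minimal C sketch of turning Nagle's algorithm off on an already connected TCP socket; the helper name and the bare-bones error handling are assumptions for the sketch, not taken from the answer:

/* Sketch: disable Nagle's algorithm on the connected TCP socket `fd`. */
#include <netinet/in.h>     /* IPPROTO_TCP */
#include <netinet/tcp.h>    /* TCP_NODELAY */
#include <sys/socket.h>     /* setsockopt  */
#include <stdio.h>          /* perror      */

static int disable_nagle(int fd)
{
    int one = 1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;   /* small writes now go out immediately instead of being coalesced */
}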

TCP_CORK aggressively accumulates data. If TCP_CORK is enabled on a socket, it will not send data until the buffer fills to a fixed limit. Like Nagle's algorithm, it accumulates data from the user, but until the buffer fills to a fixed limit, not until an ACK is received. This is useful when sending multiple blocks of data, but you have to be more careful when using TCP_CORK.

Until the 2.6 kernel, these two options were mutually exclusive. In later kernels both of them can be set together; in that case, TCP_CORK takes precedence.
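
A hedged, Linux-specific sketch of the corking pattern just described: several small writes are queued while TCP_CORK is set and flushed together when it is cleared. The helper name and the example payloads are made up for illustration:

/* Sketch: batch several small writes into as few segments as possible. */
#include <netinet/in.h>
#include <netinet/tcp.h>    /* TCP_CORK */
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

static void send_batched(int fd)
{
    int on = 1, off = 0;
    const char *parts[] = { "part-1 ", "part-2 ", "part-3\n" };

    /* Cork the socket: partial frames are held back instead of being sent. */
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));

    for (int i = 0; i < 3; i++)
        write(fd, parts[i], strlen(parts[i]));   /* queued, not sent as tiny packets */

    /* Uncork: everything still queued is transmitted now. */
    setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
}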

    Keep in mind Hussein Galal's answer which clarifies that TCP_CORK only delays a maximum of 200 ms before sending data. – b4hand Mar 07 '16 at 22:04
    "This will help in applications like telnet."? Rather the contrary is true. If you press a key, this will delay sending your keypress to the other side until an ACK for the last keypress has been received. This introduces high delay between key press and key sent and I wouldn't know of any situation where this is desirable. – Mecki May 18 '20 at 12:24

TCP_NODELAY

Used to disable Nagle's algorithm. Nagle's algorithm improves TCP/IP networks by decreasing the number of packets sent: it waits until an acknowledgment of previously sent data is received before sending the small packets that have accumulated in the meantime. Setting TCP_NODELAY turns this buffering off.

From the tcp(7) manual:

TCP_CORK (or TCP_NOPUSH in FreeBSD)

If set, don't send out partial frames. All queued partial frames are sent when the option is cleared again. This is useful for prepending headers before calling sendfile(2), or for throughput optimization. As currently implemented, there is a 200-millisecond ceiling on the time for which output is corked by TCP_CORK. If this ceiling is reached, then queued data is automatically transmitted. This option can be combined with TCP_NODELAY only since Linux 2.5.71. This option should not be used in code intended to be portable.
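
Since the quoted text notes that TCP_CORK is not portable (FreeBSD spells the equivalent option TCP_NOPUSH), portable code usually has to pick the option name at compile time. A small sketch of one way to do that; the CORK_OPTION macro and the set_cork helper are assumptions for illustration, not anything from the manual:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#if defined(TCP_CORK)
#  define CORK_OPTION TCP_CORK      /* Linux */
#elif defined(TCP_NOPUSH)
#  define CORK_OPTION TCP_NOPUSH    /* FreeBSD and other BSD-derived systems */
#endif

/* Enable or disable corking where the platform supports it; otherwise a no-op. */
static int set_cork(int fd, int enabled)
{
#ifdef CORK_OPTION
    return setsockopt(fd, IPPROTO_TCP, CORK_OPTION, &enabled, sizeof(enabled));
#else
    (void)fd; (void)enabled;
    return 0;
#endif
}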

    Thank you for pointing out what many guides have gotten totally wrong, that TCP_CORK only delays for 200ms (max), it's not literally a CORK that can jam until removed. – Orwellophile May 18 '15 at 04:40

It's an optimisation, so like any optimisation:

  1. Do not use it
  2. Wait until performance becomes a problem; then, having determined that socket latency is definitely the cause, that testing proves this will definitely fix it, AND that this is the easiest way of fixing it, do it.

Basically the aim is to avoid having to send out several frames where a single frame can be used, with sendfile() and its friends.

So, for example, in a web server you send the headers followed by the file contents: the headers are assembled in memory, and the file is then sent directly by the kernel. TCP_CORK allows the headers and the beginning of the file to be sent in a single frame, even with TCP_NODELAY, which would otherwise cause the first chunk to be sent out immediately.
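
A rough sketch of that web-server pattern (Linux-specific; the header string, descriptor names, and missing error handling are illustrative assumptions, not taken from the answer): cork the socket, write the in-memory headers, hand the file body to sendfile(2), then uncork so the headers and the start of the file can leave in the same frame:

#include <netinet/in.h>
#include <netinet/tcp.h>    /* TCP_CORK */
#include <sys/sendfile.h>   /* sendfile */
#include <sys/socket.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static void send_response(int sock, const char *path)
{
    int on = 1, off = 0;
    int file = open(path, O_RDONLY);
    struct stat st;
    fstat(file, &st);

    const char *hdr = "HTTP/1.1 200 OK\r\nConnection: close\r\n\r\n";

    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));    /* hold partial frames   */
    write(sock, hdr, strlen(hdr));                               /* headers stay queued   */
    sendfile(sock, file, NULL, st.st_size);                      /* body copied by kernel */
    setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));  /* flush what remains    */

    close(file);
}

Clearing TCP_CORK at the end is what actually pushes out any remaining partial frame.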

    Nagle itself is an optimisation, so by your logic you should turn it off and only put it on if needed :-) – camh Sep 22 '10 at 05:48
    Nagle is enabled by default and you don't need to write any code to enable it, so it will happen anyway. And no, if you were writing your own TCP stack, if you didn't need to implement Nagle, you wouldn't do so. – MarkR Sep 22 '10 at 12:38
    I wouldn't be surprised if that actually happened in a few years from now (someone no longer implementing it). The main concern some 30 or 40 years ago was that people typing on telnet at roughly 2 characters per second would generate one packet for every character. This is hardly an issue nowadays with bandwidth being much higher, remote login not playing a big role traffic-wise, and block ciphers being applied to pretty much every remote login traffic anyway. There's no way you can send less than 16 bytes with a 128-bit block cipher (not if you want to decode it on the other end, anyway). – Damon Nov 15 '13 at 10:49
    @camh I know you were kidding, but in defense of OP, the act of disabling Nagle is *sometimes* an optimization in the *latency* variable. – Mateen Ulhaq Oct 29 '19 at 01:09
    @camh Honestly I read Mark's advice initially as "use NODELAY until you determine that you need delay" precisely because Nagle is the "optimization" in my mind. It was definitely a vague suggestion in this context. – Dan Bechard Mar 16 '21 at 05:33

TCP_CORK is the opposite of TCP_NODELAY. The former forces packet-accumulation delay; the latter disables it.

    `TCP_CORK` is not the opposite of `TCP_NODELAY`. Nagle's algorithm aggregates data while waiting for a return ACK, which the latter option disables; the former aggregates data based on buffer pressure instead. – joshperry Sep 22 '15 at 00:52