
As per the title. Strangely, I couldn't readily find this in the socket reference docs for Windows or POSIX.

For the purposes of this question, I'm talking about any timeouts affecting socket API calls, i.e. any values that govern the time after which an API call returns with an error. So something like TIME_WAIT is ruled out, because it only affects system state rather than a program's control flow. The question is inspired by kill socket.accept() call on closed unix socket, where the OP claims that an accept would wait forever - which I don't believe.

  • AFAICS, there are two: one for receive and one for send, and they affect not only send/recv but all APIs that involve receiving or sending, like accept.

More specifically:

  • Is it mandated by some spec, or is it completely up to the OS vendor?
  • What are the default values for major OSes¹? At the very least, the orders of magnitude.
    • If they are configurable system-wide, where are they stored (if there are many possibilities, from the kernel's/stock library's point of view)?

¹ E.g. Windows, Debian, Red Hat, FreeBSD, Mac OS X, Android.
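
For concreteness, these are the per-socket values I mean. A minimal sketch (assuming Linux/BSD-style `getsockopt()` semantics, where an all-zero `timeval` means "no timeout", i.e. block indefinitely; `print_timeouts` is just a name for this sketch) of reading them from a program:

```c
/* Sketch: read a socket's current receive/send timeouts (POSIX-style API). */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>

static void print_timeouts(int fd)
{
    struct timeval rcv, snd;
    socklen_t len = sizeof rcv;

    if (getsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &rcv, &len) == 0)
        printf("SO_RCVTIMEO: %ld.%06ld s (0 means block indefinitely)\n",
               (long)rcv.tv_sec, (long)rcv.tv_usec);

    len = sizeof snd;
    if (getsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &snd, &len) == 0)
        printf("SO_SNDTIMEO: %ld.%06ld s (0 means block indefinitely)\n",
               (long)snd.tv_sec, (long)snd.tv_usec);
}
```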

ivan_pozdeev

1 Answer


If you're talking about API actions in the BSD Sockets API, or in systems built on it or designed to resemble it, the default accept, send, and receive timeouts are infinite. This is mandated by both the BSD Sockets API and Winsock. Most implementations don't even let you change the send timeout.

user207421
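
By way of illustration, a minimal sketch (assuming a POSIX system with BSD-style sockets; the helper names are made up for this sketch) of how a program overrides the infinite default with `SO_RCVTIMEO`, after which a blocking `recv()` returns -1 with `errno` set to `EAGAIN`/`EWOULDBLOCK` once the timeout elapses instead of waiting indefinitely:

```c
/* Sketch: bound a blocking recv() with SO_RCVTIMEO (POSIX / BSD sockets). */
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>

/* Hypothetical helper: passing 0 seconds restores the default, i.e. no timeout. */
static int set_recv_timeout(int fd, long seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
}

static void bounded_read(int fd)
{
    char buf[512];

    if (set_recv_timeout(fd, 5) < 0) {
        perror("setsockopt(SO_RCVTIMEO)");
        return;
    }
    /* Without the option above, this call could block indefinitely. */
    ssize_t n = recv(fd, buf, sizeof buf, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        fprintf(stderr, "recv timed out after 5 seconds\n");
    else if (n < 0)
        perror("recv");
    else if (n == 0)
        fprintf(stderr, "peer closed the connection\n");
}
```

On Linux, an `SO_RCVTIMEO` set on a listening socket is also honored by `accept()`, which bounds the otherwise indefinite wait the question mentions; whether other systems do the same is implementation-dependent.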
  • Any reference links for these statements? – ivan_pozdeev May 08 '17 at 03:09
  • How can this be? [TCP has timeouts all right](http://www.pcvr.nl/tcpip/tcp_time.htm), so a socket _must_ [signal somehow after an appropriate time](http://stackoverflow.com/a/17665015/648265), no? What if I use [keep-alive](http://www.tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/)? - then `recv` should detect a connection drop, too. – ivan_pozdeev May 08 '17 at 03:30
  • @ivan_pozdeev 1. My reference is the BSD Sockets *man* pages. 2. The internal TCP timeouts have nothing to do with it: you explicitly asked about things that don't affect system (actually connection) state, and that's what I answered about. TCP timeouts cause retries and ultimately resets internally. There is no internal TCP timeout on a read unless you set one via the API. `recv()` detects a connection drop by delivering ECONNRESET, not via the receive timeout mechanism. – user207421 May 08 '17 at 04:25
  • I asked about all things that affect the time after which an API call would return with an error. TIME_WAIT is ruled out because it doesn't affect that. Clarified the question. – ivan_pozdeev May 08 '17 at 14:16
  • Makes no difference. The only thing that controls the time after which a receive times out is the receive timeout set via `setsockopt()`. An internal send timeout can cause a read *error*, but at a time determined by the timeout from the send concerned, not from when the receive was entered. Consider: you do a send, it fails after a few minutes, and an hour later you do a receive. It will fail immediately. Not after waiting for the internal send timeout period, which has already elapsed. Conversely, if you did the receive immediately after the send, it will fail after the send times out. – user207421 May 08 '17 at 19:10
  • This is it - a list of both `setsockopt` and these internal timeouts and a rough picture of how they - directly or indirectly - affect call return times (just a general map - so I know what to look for if I require more info). As the question edit says, my primary intent is challenging the statement that blocking socket API calls would block forever unless I take special measures - which sounds very much like a design flaw. – ivan_pozdeev May 08 '17 at 21:38
  • A blocking read can block forever, as it doesn't do anything to the network, so there is nothing that can fail. You would need to have already done a send that is failing for the read to unblock because of a network problem. This so-called 'design flaw' is (a) actually a design *feature*: see the discussion [here](http://stackoverflow.com/a/10241044/207421), and (b) remediable via SO_RCVTIMEO (a sketch follows these comments). – user207421 May 09 '17 at 01:51
  • @ivan_pozdeev: Blocking reads always block forever for sockets, pipes etc. Consider your terminal/console. The command prompt waits forever until you enter a command. If it did not do so, most **users** (as opposed to programmers) would consider it a design flaw if the terminal suddenly exited by itself after a timeout. Same for a web server: the listening socket waits forever until there is a request, otherwise users would be annoyed that stackoverflow is only available until a timeout occurs. For read sockets the same situation applies to websockets, which wait forever on read. – slebetman May 22 '17 at 22:11
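
Tying the comment thread together, here is a minimal sketch (assuming Linux; the `TCP_KEEP*` option names are Linux-specific, and `arm_timeouts` is just a name for this sketch) of combining keep-alive probing with `SO_RCVTIMEO`, so that a blocking `recv()` on a dead or silent peer eventually returns an error instead of waiting forever:

```c
/* Sketch: keep a blocking recv() from waiting forever on a dead peer.
 * Assumes Linux; the TCP_KEEP* option names differ on some other systems. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <sys/time.h>

static int arm_timeouts(int fd)
{
    int on = 1, idle = 60, interval = 10, count = 3;
    struct timeval rcvto = { .tv_sec = 120, .tv_usec = 0 };

    /* Probe an idle connection so a silently dead peer eventually turns
     * into an error on recv() (e.g. ETIMEDOUT or ECONNRESET). */
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval) < 0 ||
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count) < 0)
        return -1;

    /* Independently, cap how long any single recv() may block. */
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &rcvto, sizeof rcvto);
}
```

With keep-alive armed, a vanished peer is eventually reported to `recv()` as an error, in the same way as the ECONNRESET delivery described above, while `SO_RCVTIMEO` separately caps how long each individual call may block.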