
I have a tiny little TCP app where many clients connect to a server, send data to it (via write() - they also send the message size) and then exit. I have the clients send \0\0 to the server when they are done sending - and if the server gets a zero from read() it knows something went wrong in the client (like a SIGKILL). My question is - is there any programmatic way (some syscall) to notify the server that I am done sending, instead of the server always checking for \0\0? The server uses poll() on the client/listening sockets to detect whether there is something to read or a new connection request, btw.

Should I send a signal? But how do I know which descriptor to stop polling then?

I read this, but the answers there are more or less what I use now.

Mr_and_Mrs_D
  • What's wrong with your server checking for two nulls? – Edward Thomson Jun 22 '11 at 18:10
  • A common method is for the client to send first a fixed-size header specifying the size of the message it is going to send next, followed by the message itself. Then the server will know from that header how many bytes to read. – Jeremy Friesner Jun 22 '11 at 18:13
  • @jeremy: it will be sending many messages - I want to know which is the last message – Mr_and_Mrs_D Jun 22 '11 at 18:48
  • @Mr_and_Mrs_D: the easiest is probably to design your protocol to have some sort of `QUIT` command. – Bruno Jun 22 '11 at 18:53
  • What is wrong with closing the connection from the client side? Why does the server need an explicit notification of the client being done? – Mike Pennington Jun 23 '11 at 14:00
  • If you are looking for a syscall, shouldn't `close(fd)` or `shutdown(fd,SHUT_WR)` work? – Robᵩ Jun 23 '11 at 16:23
  • @mike @rob: when I read() 0 bytes from a socket I suppose something *went wrong* on the client - would not close() and shutdown(), as well as shutting the client down, result in 0 bytes being read? – Mr_and_Mrs_D Jun 23 '11 at 18:52
  • 1
    Yes, the server cannot distinguish between a normal and an abnormal `close()`. But, I claim that, even with your protocol, it can't. What if the client crashes after sending the `\0\0`, but before closing the connection? The server can't reasonably know the success/failure of the client, and it ought not try. – Robᵩ Jun 23 '11 at 20:16
  • @Rob - thanks - as long as it sent me my \0\0 it might as well crash - not an issue for me in this case. Good point though – Mr_and_Mrs_D Jun 27 '11 at 11:29
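
For reference, the half-close suggested in the comments, `shutdown(fd, SHUT_WR)`, might look roughly like this on the client side. This is only a sketch: `send_all` and `finish_sending` are hypothetical helpers, and error handling is kept minimal.

```c
/* Client-side sketch of a half-close: after the last write(),
 * shutdown(fd, SHUT_WR) tells the peer that no more data will follow;
 * the server's read() then returns 0 once everything already sent has
 * been consumed. */
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* hypothetical helper: write the whole buffer, handling short writes */
static int send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n <= 0)
            return -1;                /* error handling kept minimal */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* hypothetical helper: send the final message, then half-close */
static void finish_sending(int sockfd, const char *last_msg)
{
    send_all(sockfd, last_msg, strlen(last_msg));
    shutdown(sockfd, SHUT_WR);        /* "I am done sending" */
    /* the socket can still be read from here if a reply is expected */
    close(sockfd);
}
```

As discussed in the comments, though, a plain read() of 0 cannot by itself tell the server whether the client shut down deliberately or was killed; an in-band marker (like the \0\0) is what provides that distinction.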

2 Answers


Doing it at the application level (e.g. using \0\0 as you're doing) is the correct way to do it if your protocol is a bit more complex than a single request/response model.

HTTP 1.0, for example, closes the connection straight after a single request/response: the client sends its request command, the server replies with its response and closes the connection.

In protocols where you have a more complex exchange, there are specific commands to indicate the end of a message. SMTP and POP3, for example, are line-delimited. When sending the content of an e-mail via SMTP, you indicate the end of the message with a `.` on a single line (a `.` in the actual message is escaped as `..`). You also get commands such as `QUIT` to indicate you're done.

In HTTP 1.1, the set of request headers is terminated by an empty line (e.g. GET / HTTP/1.1, then each header on its own line, then an empty line), so the server knows where the end of the request is. Responses in HTTP 1.1 then use either a Content-Length header (to indicate where the end of the response body will be) or chunked transfer encoding, which essentially inserts a number of delimiters to indicate whether there's more data coming (it's usually used when the server doesn't know the data size in advance). (Requests that have a body also use the same headers to indicate where the request ends.)

It's otherwise difficult for the server to know when it's done reading, since it's generally not possible to detect whether a socket is disconnected (or rather, whether it's still connected even though the client isn't sending any data). By sending some delimiter or length indicator at the application level, you avoid this sort of problem (or can detect when there's a problem/timeout).
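
To make the length-indicator idea concrete, here is a minimal sketch of the server side. It assumes a hypothetical framing where each message is preceded by a 4-byte length in network byte order, and a zero length means the client is done; `read_exact` and `read_messages` are illustrative helpers, not part of any API.

```c
/* Sketch: each message is preceded by a 4-byte length in network byte
 * order; a length of 0 means "no more messages". Error handling is
 * simplified for illustration. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* hypothetical helper: read exactly len bytes or fail */
static int read_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0)
            return -1;          /* 0 = peer closed unexpectedly, <0 = error */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* server side: read framed messages until the zero-length terminator */
static int read_messages(int fd)
{
    for (;;) {
        uint32_t netlen;
        if (read_exact(fd, &netlen, sizeof netlen) < 0)
            return -1;              /* connection died mid-stream */
        uint32_t len = ntohl(netlen);
        if (len == 0)
            return 0;               /* client says: done sending */

        char *msg = malloc(len);
        if (!msg || read_exact(fd, msg, len) < 0) {
            free(msg);
            return -1;
        }
        /* ... process msg[0..len) ... */
        free(msg);
    }
}
```

With this framing, a read() of 0 anywhere other than at a frame boundary can be treated as the "something went wrong" case from the question.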

Bruno
  • Thanks for the detailed reply - so no way via the API to say "done". I am already sending the buffer size, so it just now occurred to me that I can send 0 as the buffer size after the last bytes are sent - this would avoid checking every time for \0\0. – Mr_and_Mrs_D Jun 22 '11 at 18:56

This is done at the application level. In HTTP it is done by closing the socket for a response. Also, in HTTP, after the server receives two returns it knows the GET request has finished being sent; then, if there is a Content-Length header, it knows that the client/server is finished sending after X bytes.

You will need to implement something similar.
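
For instance, a minimal client-side sketch of that idea, assuming a hypothetical framing with a 4-byte length prefix in network byte order and a zero-length frame meaning "done" (`send_msg` and `example_session` are made-up names for illustration):

```c
/* Client-side sketch: prefix each message with its length (network
 * byte order) and finish with a zero-length frame. */
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* hypothetical helper; short writes are not handled, for brevity */
static int send_msg(int fd, const char *msg, uint32_t len)
{
    uint32_t netlen = htonl(len);
    if (write(fd, &netlen, sizeof netlen) != (ssize_t)sizeof netlen)
        return -1;
    if (len > 0 && write(fd, msg, len) != (ssize_t)len)
        return -1;
    return 0;
}

/* usage: send a few messages, then the "done" marker, then close */
static void example_session(int sockfd)
{
    send_msg(sockfd, "hello", 5);
    send_msg(sockfd, "world", 5);
    send_msg(sockfd, NULL, 0);    /* zero length: done sending */
    close(sockfd);
}
```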

Rocky Pulley
  • "In HTTP it is done by closing the socket for a response"? Not really, you can stream multiple requests in HTTP and keep the connection alive. – Bruno Jun 22 '11 at 18:14