
Possible Duplicate:
How to check if socket is closed in Boost.Asio?

Is there an established way to determine whether the other end of a TCP connection is closed in the asio framework without sending any data?

Using Boost.Asio for a server process: if the client times out or otherwise disconnects before the server has responded to a request, the server doesn't find this out until it has finished the request and generated a response to send, at which point the send immediately fails with a connection-aborted error.

For some long-running requests, this can lead to clients canceling and retrying over and over, piling up many instances of the same request running in parallel, making them take even longer and "snowballing" into an avalanche that makes the server unusable. Essentially hitting F5 over and over is a denial-of-service attack.

Unfortunately I can't start sending a response until the request is complete, so "streaming" the result out is not an option. I need to be able to check at key points during the request processing and stop that processing if the client has given up.


3 Answers


Just check for boost::asio::error::eof error in your async_receive handler. It means the connection has been closed.


The key to this problem is to avoid doing request processing in the receive handler. Previously, I was doing something like this:

async_receive(..., recv_handler)

void recv_handler(error) {
    if (!error) {
        parse input
        process input
        async_send(response, ...)
    }
}

Instead, the appropriate pattern is more like this:

async_receive(..., recv_handler)

void recv_handler(error) {
    if (error) {
        canceled_flag = true;
    } else {
        // start a processing event
        if (request_in_progress) {
            capture input from input buffer
            io_service.post(process_input)
        }
        // post another read request
        async_receive(..., recv_handler)
    }
}

void process_input() {
    while (!done && !canceled_flag) {
        process input
    }
    async_send(response, ...)
}

Obviously I have left out lots of detail, but the important part is to post the processing as a separate "event" in the io_service thread pool, so that an additional receive can run concurrently with it. This is what allows the "connection aborted" error to be received while processing is still in progress. Be aware, however, that the two handlers may now run on different threads, so they require some kind of synchronization, and the input being processed must be kept separate from the buffer into which the new receive call reads, since more data may arrive because of that additional read.

edit:

I should also note that, should you receive more data while the processing is happening, you probably do not want to start another asynchronous processing call. It's possible that this later processing could finish first, and the results could be sent to the client out-of-order. Unless you're using UDP, that's likely a serious error.

Here's some pseudo-code:

async_read (=> read_complete)
read_complete
    store new data in queue
    if not currently processing
        if a full request is in the queue
            async_process (=> process_complete)
    else
        ignore data for now
    async_read (=> read_complete)
async_process (=> process_complete)
    process data
process_complete
    async_write_result (=> write_complete)
write_complete
    if a full request is in the queue
        async_process (=> process_complete)

So, if data is received while a request is in process, it's queued up but not processed. Once processing completes and the result is sent, then we may start processing again with the data that was received earlier.

This can be optimized a bit more by allowing processing to occur while the result of the previous request is being written, but that requires even more care to ensure that the results are written in the same order as the requests were received.

  • In this case, isn't process_input() running in the same thread as the receive handler (async_recv)? – Ghita Feb 02 '12 at 13:49
  • @Ghita In my second example, `process_input()` is executed by a thread in the `io_service` thread pool which may or may not be the same thread which ran `async_recv()`. The key is that, wherever it runs, another thread in the pool has the opportunity to receive a second `async_recv()` (which may have additional data or indicate an error) while the processing is still running. – Tim Sylvester Feb 02 '12 at 17:58
  • So basically, in the initial configuration you were receiving pieces of data from the client using async_receive(), and after each piece you had to "pause" to process that data before receiving more (by calling async_receive() again, I guess). In the second configuration you can run both in parallel, with careful synchronization as you said. – Ghita Feb 02 '12 at 18:12
  • Was the processing ("process input") taking a long time before, and does the new method detect the socket close efficiently? I mean, in process_input() you would have to split the work into pieces small enough that you can see the close() coming and throw away the work done for that client. – Ghita Feb 02 '12 at 18:16
  • Yes, I think you've got the idea. The processing must have some "checkpoints" where you can test whether the client is still waiting for this to make sense. – Tim Sylvester Feb 03 '12 at 00:18

If the connection has gone through an orderly shutdown, i.e. the client called close or shutdown on the socket, then you can do a non-blocking one-byte read on the socket to determine whether it's still connected:

int ret = recv(sockfd, buf, 1, MSG_DONTWAIT | MSG_PEEK);
  1. If it's connected but there's no data in the buffer you'll get a return of -1 with errno == EAGAIN
  2. If it's connected and there's data you'll get back 1, and the MSG_PEEK flag will leave the data in the socket buffer.
  3. Otherwise ret will equal 0, indicating a graceful shutdown of the connection.

Now this technique isn't foolproof, but it works a significant portion of the time, as long as the FIN packet from the client has arrived.

You should be able to adapt this for use with Boost.Asio as long as it lets you pass socket flags to its recv function.

  • ASIO is a portable asynchronous system that does not expose any of the Berkeley sockets APIs like *recv()* or its flags. In addition, there's no point in starting another read while in the handler for a read, because it is protected by a "strand." Nevertheless, I believe this is the correct direction, and I've got the beginnings of a solution based on the same idea, which I will post when I get it nailed down. – Tim Sylvester May 31 '10 at 06:11
  • Yeck, that's why I always just use the Berkeley sockets API directly whatever language I happen to be programming in. These various wrapper libraries and API's take away too much control in the process of trying to make things simple. – Robert S. Barnes May 31 '10 at 08:11
  • Unfortunately the Berkeley sockets API does not scale very well. ASIO is far more than a wrapper around a simple API, as it takes advantage of kqueue on OSX, IOCP on Win32, and epoll on Linux, not to mention supporting uniform asynchronous I/O on files, pipes, etc. IOCP especially is quite difficult to do manually, and cleanly implementing all three with threading is a daunting task. – Tim Sylvester Jun 01 '10 at 16:51
  • For those who would need to do that, I did it using asio socket `native_handle()` method – deepskyblue86 Jul 19 '17 at 17:06