The key to this problem is to avoid doing request processing in the receive handler. Previously, I was doing something like this:
    async_receive(..., recv_handler)

    void recv_handler(error) {
        if (!error) {
            parse input
            process input
            async_send(response, ...)
        }
    }
Instead, the appropriate pattern is more like this:
    async_receive(..., recv_handler)

    void recv_handler(error) {
        if (error) {
            canceled_flag = true;
        } else {
            // start a processing event
            if (request_in_progress) {
                capture input from input buffer
                io_service.post(process_input)
            }
            // post another read request
            async_receive(..., recv_handler)
        }
    }
    void process_input() {
        while (!done && !canceled_flag) {
            process input
        }
        async_send(response, ...)
    }
Obviously I have left out lots of detail, but the important part is to post the processing as a separate "event" in the io_service thread pool so that an additional receive can run concurrently. This allows the "connection aborted" message to be received while processing is in progress. Be aware, however, that this means two threads must communicate, which requires some kind of synchronization. It also means the input being processed must be kept separate from the input buffer that the receive call reads into, since more data may arrive as a result of the additional read call.
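To make that more concrete, here is a minimal Boost.Asio (C++14) sketch of the pattern. The Connection class, member names, and the 4096-byte buffer are assumptions for illustration, not a complete implementation; the send path and most error handling are elided:

    #include <boost/asio.hpp>
    #include <array>
    #include <atomic>
    #include <memory>
    #include <vector>

    class Connection : public std::enable_shared_from_this<Connection> {
    public:
        Connection(boost::asio::io_service& io, boost::asio::ip::tcp::socket socket)
            : io_(io), socket_(std::move(socket)) {}

        void start() { do_receive(); }

    private:
        void do_receive() {
            auto self = shared_from_this();
            socket_.async_receive(boost::asio::buffer(recv_buffer_),
                [this, self](const boost::system::error_code& error, std::size_t bytes) {
                    if (error) {
                        canceled_ = true;  // e.g. the client aborted the connection
                        return;
                    }
                    // Only start a processing job if one isn't already running
                    // (see the note below about out-of-order results).
                    if (!processing_.exchange(true)) {
                        // Copy the bytes out so the receive buffer can be reused
                        // by the next async_receive while processing runs.
                        std::vector<char> input(recv_buffer_.begin(),
                                                recv_buffer_.begin() + bytes);
                        io_.post([this, self, input = std::move(input)]() {
                            process_input(input);
                        });
                    }
                    do_receive();  // keep a read outstanding at all times
                });
        }

        void process_input(const std::vector<char>& input) {
            // Potentially long-running work; check canceled_ periodically so an
            // abort noticed by the concurrent read can stop it early.
            // ... parse and process `input`, bailing out if canceled_ is set ...
            if (!canceled_) {
                // async_send the response here
            }
            processing_ = false;
        }

        boost::asio::io_service& io_;
        boost::asio::ip::tcp::socket socket_;
        std::array<char, 4096> recv_buffer_{};
        std::atomic<bool> canceled_{false};
        std::atomic<bool> processing_{false};
    };

Here, copying the bytes into input before posting is what keeps the data being processed separate from the buffer the next async_receive fills, and the atomic flags stand in for whatever synchronization the two threads need.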
edit:
I should also note that, should you receive more data while the processing is happening, you probably do not want to start another asynchronous processing call. It's possible that this later processing could finish first, and the results could be sent to the client out-of-order. Unless you're using UDP, that's likely a serious error.
Here's some pseudo-code:
    async_read (=> read_complete)

    read_complete
        store new data in queue
        if not currently processing
            if a full request is in the queue
                async_process (=> process_complete)
            else ignore data for now
        async_read (=> read_complete)

    async_process (=> process_complete)
        process data

    process_complete
        async_write_result (=> write_complete)

    write_complete
        if a full request is in the queue
            async_process (=> process_complete)
So, if data is received while a request is being processed, it's queued but not processed immediately. Once processing completes and the result has been sent, we may start processing again with the data that was received earlier.
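As a concrete building block for that queue, here is a small, self-contained C++17 sketch; the newline-delimited framing is purely an assumption for illustration (a real protocol would define its own notion of a "full request"):

    #include <cstddef>
    #include <optional>
    #include <string>

    // Accumulates received bytes and hands back complete requests in order.
    class RequestQueue {
    public:
        // "store new data in queue"
        void append(const char* data, std::size_t n) { buffer_.append(data, n); }

        // "if a full request is in the queue"
        bool has_full_request() const {
            return buffer_.find('\n') != std::string::npos;
        }

        // Pop the oldest complete request, if any.
        std::optional<std::string> pop_request() {
            auto pos = buffer_.find('\n');
            if (pos == std::string::npos) return std::nullopt;
            std::string request = buffer_.substr(0, pos);
            buffer_.erase(0, pos + 1);
            return request;
        }

    private:
        std::string buffer_;  // bytes received but not yet consumed
    };

The read handler would call append and then start processing only when has_full_request returns true and no processing is already running, mirroring the pseudo-code above.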
This can be optimized a bit more by allowing processing to occur while the result of the previous request is being written, but that requires even more care to ensure that the results are written in the same order as the requests were received.
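One way to handle that ordering, sketched below, is to number requests as they are received and buffer completed responses until all earlier ones have been written; the ResponseSequencer name and the use of std::map are my own illustration, not part of the original pattern:

    #include <cstdint>
    #include <map>
    #include <string>
    #include <utility>

    // Buffers out-of-order responses and releases them in request order.
    class ResponseSequencer {
    public:
        // Call when the response for request `seq` has finished processing,
        // possibly out of order. `write` is invoked for each response that is
        // now ready to go out.
        template <typename WriteFn>
        void complete(std::uint64_t seq, std::string response, WriteFn write) {
            pending_.emplace(seq, std::move(response));
            // Flush every response that is next in line.
            while (!pending_.empty() && pending_.begin()->first == next_to_write_) {
                write(pending_.begin()->second);
                pending_.erase(pending_.begin());
                ++next_to_write_;
            }
        }

    private:
        std::uint64_t next_to_write_ = 0;              // sequence number expected next
        std::map<std::uint64_t, std::string> pending_; // finished but unwritten responses
    };

In an actual Asio program, write would typically start an async_write, and the flush would be driven from the write-completion handler so that only one write is outstanding at a time.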