Here's my network scenario: a server (implemented with Boost.Asio) receives requests, processes them, passes them on to several hosts (using an HTTP client also implemented with Asio), waits for the answers, processes those answers, and then replies to the original request.
The problem is that the hosts are granted a time limit, and the server must respect it while waiting for answers (i.e. if the limit is 100ms, the server can't close the connection before 100ms have passed).
Since my server uses a worker thread pool (each thread runs boost::asio::io_service::run()), with slow hosts the blocking wait for answers quickly becomes a bottleneck (i.e. all workers end up busy waiting, and no more requests can be served).
Here's the server handler code (mostly elided):
void connection::handle(asio::yield_context yield)
{
    boost::system::error_code ec;
    // read request line
    asio::async_read_until(socket_, request_buf, "\r\n", yield);
    // read headers
    asio::async_read_until(socket_, request_buf, "\r\n\r\n", yield);
    // handle request
    // THIS IS A BLOCKING CALL
    request_handler_->handle_request(request_, reply_);
    // write reply
    asio::async_write(socket_, reply_.to_buffers(), yield[ec]);
}
And here's what handle_request() does:
http::client::client c(host, port);
// this is an asynchronous call returning immediately
std::future<http::reply> res = c.send(request);
// BOTTLENECK PROBLEM HERE
// this is a blocking wait for the answer
auto reply = res.get();
First of all, I could rewrite handle_request() in an asynchronous manner as shown here: https://stackoverflow.com/a/26728121 (that alone won't solve the problem, but it may be useful later on). I could also pass a callback to client::send() to avoid fetching the response explicitly.
I want to somehow "yield" that blocking wait until all responses are received, so that the workers become free to serve other incoming requests in the meantime.
I tried boost::coroutine and boost::fiber, but without success: both of them just transfer execution context, yet I have no context to transfer to; I simply need to "wait".