
It seems that the example uses of the Spray HTTP server make it painfully easy to end up with a server that processes requests sequentially instead of concurrently. This happens because the examples implement the routing object as an actor that processes one request at a time (facepalm? **). This seems to be a common problem.

For example, in the code below, accessing /work1 processes the request asynchronously, but /work2 unfortunately blocks ALL other requests (let's assume, for example, that /work2 needs to get busy authenticating a token from a cookie against a database).

Is there a way to use spray.routing so that execution is forked before the request ever reaches the routing?

import akka.actor.ActorSystem
import spray.http.{MediaTypes, HttpEntity}
import spray.routing.SimpleRoutingApp
import scala.concurrent.Future

class MySimpleServer(val system: ActorSystem, val HOST: String, val PORT: Int) extends SimpleRoutingApp {

  implicit val _system: ActorSystem = system
  import _system.dispatcher

  def main(args: Array[String]): Unit = {
    startServer(interface = HOST, port = PORT) {
      get {
        path("work1") {
          complete {
            // Asynchronously process some work
            Future.apply {
              Thread.sleep(1000)
              HttpEntity(
                MediaTypes.`text/html`,
                "OK"
              )
            }
          }
        } ~
          path("work2") {
            complete {
              // Synchronously process some work and block all routing for this Actor.
              // Oh sh*t!
              Thread.sleep(1000)
              HttpEntity(
                MediaTypes.`text/html`,
                "OK"
              )
            }
          } 
      }
    }
  }
}

** Since routing is typically a stateless operation, there doesn't seem to be a benefit to making the router an Actor, right?

For every other webserver I've used, forking control of the connection to a handler process or thread (more sensibly, IMO) happens almost immediately after the TCP connection is accepted. (I think) this maximizes the rate at which connections can be accepted and minimizes the risk of unintentional blocking -- at the very least, it completely avoids unintentional blocking in routing.


Update:

As @rahilb suggested, I used:

detach() {
  get { ... }
}

and calling it as:

 val responses = (0 until 10)
    .map { _ => (IO(Http) ? HttpRequest(GET, s"${TEST_BASE_URL}work1")).mapTo[HttpResponse] }
    .map { response => Await.result(response, 5 seconds) }

... still takes >3 seconds in total, for either work1 or work2.
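For completeness, the timing test above assumes roughly the following setup (TEST_BASE_URL, the actor system name, and the timeout values are placeholders, not part of the original code):

import akka.actor.ActorSystem
import akka.io.IO
import akka.pattern.ask
import akka.util.Timeout
import spray.can.Http
import spray.http.{HttpRequest, HttpResponse}
import spray.http.HttpMethods.GET
import scala.concurrent.Await
import scala.concurrent.duration._

object TimingTest extends App {
  implicit val system = ActorSystem("timing-test")
  implicit val timeout = Timeout(10.seconds)   // required by the ask (?) pattern below

  val TEST_BASE_URL = "http://localhost:8080/" // placeholder, points at the running server

  val start = System.nanoTime()
  val responses = (0 until 10)
    .map { _ => (IO(Http) ? HttpRequest(GET, s"${TEST_BASE_URL}work1")).mapTo[HttpResponse] }
    .map { response => Await.result(response, 5.seconds) }
  println(s"10 requests took ${(System.nanoTime() - start) / 1e9} seconds")

  system.shutdown()
}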

user48956

1 Answer


Actually, even your work1 route has the potential to starve the HTTP Actor, as the ExecutionContext used in Future.apply is system.dispatcher, i.e. the spray HttpServiceActor's own dispatcher. We can provide a different ExecutionContext for long-running futures so we do not risk starving spray's.
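For example, a minimal sketch of what that could look like (the blockingExecutionContext name and the pool size are my own placeholders, not from the question; the route fragment slots into the startServer { get { ... } } block above, where the spray-routing DSL is in scope):

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// Dedicated pool reserved for blocking work, kept separate from system.dispatcher.
val blockingExecutionContext: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(16))

path("work2") {
  complete {
    // The Future now runs on the dedicated pool, so the HttpServiceActor's
    // dispatcher is never tied up by this blocking call.
    Future {
      Thread.sleep(1000)
      HttpEntity(MediaTypes.`text/html`, "OK")
    }(blockingExecutionContext)
  }
}

A dispatcher configured in application.conf and obtained via system.dispatchers.lookup would work just as well as the hand-rolled pool here.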

To answer your question though, there is a directive called detach that will run the rest of the route on some ExecutionContext, potentially leaving more resources free for incoming requests... but since it is a directive, forking only occurs after the route is hit.
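For reference, a rough sketch of detach on the blocking work2 route (assuming spray-routing 1.2.x, where detach() resolves an implicit ExecutionContext, e.g. the import _system.dispatcher already present in the question's code, or an ActorRefFactory in scope):

path("work2") {
  detach() {
    // The inner route is now run in a Future on the implicit ExecutionContext,
    // so the sleep no longer blocks the HttpServiceActor itself.
    complete {
      Thread.sleep(1000)
      HttpEntity(MediaTypes.`text/html`, "OK")
    }
  }
}

Note that if the implicit ExecutionContext is still system.dispatcher, the detached work keeps occupying spray's own thread pool; combining detach with a dedicated ExecutionContext, as in the earlier sketch, is what actually keeps spray's threads free.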

rahilb
  • That seemed to help, but still, if I execute 10 requests, the total takes >3 seconds (when calling either work1 or work2). Have updated the question with my workings. – user48956 Nov 10 '15 at 17:41