I'm trying to write a simple akka-http and akka-streams based application that handles HTTP requests, always with one pre-materialized stream, because I plan to use long-running processing with back-pressure in my requestProcessor stream.

My application code:

import akka.actor.{ActorSystem, Props}
import akka.http.scaladsl._
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server._
import akka.stream.ActorFlowMaterializer
import akka.stream.actor.ActorPublisher
import akka.stream.scaladsl.{Sink, Source}

import scala.annotation.tailrec
import scala.concurrent.Future

object UserRegisterSource {
  def props: Props = Props[UserRegisterSource]

  final case class RegisterUser(username: String)

}

class UserRegisterSource extends ActorPublisher[UserRegisterSource.RegisterUser] {

  import UserRegisterSource._
  import akka.stream.actor.ActorPublisherMessage._

  val MaxBufferSize = 100
  var buf = Vector.empty[RegisterUser]

  override def receive: Receive = {
    case request: RegisterUser =>
      if (buf.isEmpty && totalDemand > 0)
        onNext(request)
      else {
        buf :+= request
        deliverBuf()
      }
    case Request(_) =>
      deliverBuf()
    case Cancel =>
      context.stop(self)
  }

  @tailrec final def deliverBuf(): Unit =
    if (totalDemand > 0) {
      if (totalDemand <= Int.MaxValue) {
        val (use, keep) = buf.splitAt(totalDemand.toInt)
        buf = keep
        use foreach onNext
      } else {
        val (use, keep) = buf.splitAt(Int.MaxValue)
        buf = keep
        use foreach onNext
        deliverBuf()
      }
    }
}

object Main extends App {
  val host = "127.0.0.1"
  val port = 8094

  implicit val system = ActorSystem("my-testing-system")
  implicit val fm = ActorFlowMaterializer()
  implicit val executionContext = system.dispatcher

  val serverSource: Source[Http.IncomingConnection, Future[Http.ServerBinding]] = Http(system).bind(interface = host, port = port)

  val mySource = Source.actorPublisher[UserRegisterSource.RegisterUser](UserRegisterSource.props)
  val requestProcessor = mySource
    .mapAsync(1)(fakeSaveUserAndReturnCreatedUserId)
    .to(Sink.head[Int])
    .run()

  val route: Route =
    get {
      path("test") {
        parameter('test) { case t: String =>
          requestProcessor ! UserRegisterSource.RegisterUser(t)

          ???
        }
      }
    }

  def fakeSaveUserAndReturnCreatedUserId(param: UserRegisterSource.RegisterUser): Future[Int] =
    Future.successful {
      1
    }

  serverSource.to(Sink.foreach {
    connection =>
      connection handleWith Route.handlerFlow(route)
  }).run()
}

I found a solution for how to create a Source that can dynamically accept new items to process, but I can't find any solution for how to then obtain the result of the stream execution in my route.

  • We don’t have that exposed in a reusable fashion right now, but `Http().singleRequest` is implemented in a way that expresses what you want, take a look at https://github.com/akka/akka/blob/release-2.3-dev/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolGateway.scala and surrounding sources for inspiration. – Roland Kuhn Jun 15 '15 at 10:10

1 Answer

The direct answer to your question is to materialize a new Stream for each HttpRequest and use Sink.head to get the value you're looking for. Modifying your code:

import akka.stream.scaladsl.Keep //needed for Keep.both

val requestStream = 
  mySource.mapAsync(1)(fakeSaveUserAndReturnCreatedUserId) //mapAsync, since the fake save returns a Future
          .toMat(Sink.head[Int])(Keep.both) //plain .to() would keep only the Source's ActorRef
          //.run() - don't materialize here

val route: Route =
  get {
    path("test") {
      parameter('test) { case t: String =>
        //materialize a new Stream here: Keep.both yields both the
        //publisher ActorRef and the Future completed by Sink.head
        val (processorRef, userIdFut) = requestStream.run()

        processorRef ! UserRegisterSource.RegisterUser(t)

        //get the result of the Stream
        userIdFut onSuccess { case userId : Int => ...}
      }
    }
  }

However, I think your question is ill-posed. In your code example, the only thing you're using an akka Stream for is to create a new UserId. Futures readily solve this problem without the need for a materialized Stream (and all the accompanying overhead):

val route: Route =
  get {
    path("test") {
      parameter('test) { case t: String =>
        val user = RegisterUser(t)

        fakeSaveUserAndReturnCreatedUserId(user) onSuccess { case userId : Int =>
          ...
        }
      }
    }
  }
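The `onSuccess` callback above is fire-and-forget; since akka-http can complete a route directly from a `Future` via marshalling, it is usually cleaner to `map` the `Future` into the response value instead. The shape of that flow can be sketched with plain Scala Futures, no akka required (the `saveUserAndReturnId` stand-in below is hypothetical, mirroring `fakeSaveUserAndReturnCreatedUserId`):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// hypothetical stand-in for fakeSaveUserAndReturnCreatedUserId
def saveUserAndReturnId(username: String): Future[Int] =
  Future.successful(1)

// rather than a fire-and-forget onSuccess callback, map the Future
// into the value the route would eventually complete with
val response: Future[String] =
  saveUserAndReturnId("alice").map(id => s"created user $id")

println(Await.result(response, 1.second)) // prints "created user 1"
```

In a real route this would become `complete(userIdFut.map(...))`, letting akka-http keep the request open until the `Future` finishes.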

If you want to limit the number of concurrent calls to `fakeSaveUserAndReturnCreatedUserId`, then you can create an `ExecutionContext` with a fixed thread-pool size, as explained in the answer to this question, and use that `ExecutionContext` to create the Futures:

import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

val ThreadCount = 10 //concurrent queries

val limitedExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(ThreadCount))

def fakeSaveUserAndReturnCreatedUserId(param: UserRegisterSource.RegisterUser): Future[Int] =
  Future { 1 }(limitedExecutionContext)
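To see the thread-pool cap in action, here is a small self-contained sketch (plain Scala, no akka; the atomic counters are only instrumentation, not part of the technique): it fires 20 fake saves through a 2-thread `ExecutionContext` and records the peak number of tasks running at once, which never exceeds the pool size.

```scala
import java.util.concurrent.Executors
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

val ThreadCount = 2 //concurrent queries
val pool = Executors.newFixedThreadPool(ThreadCount)
implicit val limitedEc: ExecutionContext = ExecutionContext.fromExecutor(pool)

val running = new AtomicInteger(0) // tasks currently executing
val peak = new AtomicInteger(0)    // highest concurrency observed

// fire 20 fake "save user" calls; the pool caps how many run at once
val saves = Future.traverse((1 to 20).toList) { i =>
  Future {
    val now = running.incrementAndGet()
    peak.updateAndGet(p => math.max(p, now))
    Thread.sleep(10) // simulate slow I/O
    running.decrementAndGet()
    i // the "created user id"
  }
}

val ids = Await.result(saves, 10.seconds)
println(s"saved ${ids.size} users, peak concurrency ${peak.get}")
pool.shutdown()
```

The important point is that back-pressure here comes from the bounded pool, not from a materialized stream: excess `Future` bodies simply queue inside the executor until a thread frees up.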
Ramón J Romero y Vigil
  • Hi, if I understand correctly, calling `requestStream.run()` on every request means the back-pressure mechanism of akka-streams will not be used, because every run will create a separate instance of the stream? – greenhost87 Nov 05 '15 at 13:39
  • There is no other solution. You cannot have multiple consumers (Sinks) connected to the same Source because they'll each process the data at different rates and therefore have unique back pressures. The only thing you need back pressure for is to limit the number of concurrent UserId creations, which can be done with a limited ExecutionContext as explained in the answer. – Ramón J Romero y Vigil Nov 05 '15 at 13:58