How could actors be used in complex back-end services that consist of receiving an initial request, doing some processing, sending requests to other services, waiting for responses from some of them, deciding how to continue based on the responses, and so on, until the final result is computed?
Trying to use actors to implement such services raises the question: which parts of this workflow should be implemented by actors, and which should not?
Actor instances do not release the threads they are using until the task they are working on is completed (unless the work is delegated to a Future). So writing the whole workflow as a hierarchy of smaller and smaller actors, with N layers of children, looks on the one hand like the ideal design based on the actor concept, but on the other hand it would keep up to N-1 threads (per initial request) constantly locked, doing nothing but waiting for one of the bottom actors in the hierarchy to complete. In particular, the topmost actor in the hierarchy would be idle most of the time.
This hierarchical design also matches the error kernel pattern.
But this sounds very bad for concurrency.
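The thread cost of such a hierarchy can be sketched with plain Java `CompletableFuture`s standing in for child-actor replies (the three levels and their return values are invented for illustration):

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical 3-level hierarchy where each level blocks on the level
// below it, the way an actor awaiting a child's reply would: while the
// bottom level works, both upper threads are parked doing nothing.
public class BlockingHierarchy {
    static CompletableFuture<Integer> bottom() {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
            return 1; // the only level doing real work
        });
    }

    static CompletableFuture<Integer> middle() {
        // join() parks this level's thread until the child completes
        return CompletableFuture.supplyAsync(() -> bottom().join() + 1);
    }

    public static void main(String[] args) {
        int result = middle().join() + 1; // the "topmost actor" also just waits
        System.out.println(result);
    }
}
```

With N levels written this way, a single request keeps N-1 threads parked in `join()` while only the bottom one computes.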
And even if you kept this hierarchy of actors but wrapped each call to a child actor in a Future (by asking the child actors rather than telling them), so that the blocking would be much shorter (though it would still exist) - working with so many Futures makes it very hard to work with stateful actors, or with actors that modify the state of the system in ways that are not synchronized by themselves. That is, not simply writing to a database (which, at least for simple requests, is handled automatically by a database transaction), but modifying some global variable in the application.
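The state hazard can be sketched again with plain `CompletableFuture`s (the two "services" and the `total` field are made up; `total` stands in for an actor's internal state):

```java
import java.util.concurrent.CompletableFuture;

// Non-blocking composition via callbacks: no thread waits for the
// children, but the callback runs on an arbitrary pool thread, NOT
// under the parent actor's single-threaded guarantee, so the write
// to `total` races with any other code that touches it.
public class UnsafeCallback {
    static int total = 0; // stands in for an actor's mutable internal state

    static CompletableFuture<Integer> callServiceA() {
        return CompletableFuture.supplyAsync(() -> 10);
    }

    static CompletableFuture<Integer> callServiceB() {
        return CompletableFuture.supplyAsync(() -> 32);
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> sum =
            callServiceA().thenCombine(callServiceB(), Integer::sum);
        sum.thenAccept(s -> total = s); // unsynchronized mutation of shared state
        sum.join();
    }
}
```

This is exactly the pattern that asking (rather than telling) pushes you toward: the combining logic escapes the actor that owns the state.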
So should actors be used only in the lower levels of the workflow?
Or should a workflow that is mostly hierarchical (and most are) be rewritten in a more serial way, so it won't require so many levels of child actors? That would be quite hard and unnatural, and seems like giving the implementation framework too much influence over the design.
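For concreteness, the "serial" alternative would look something like a single coordinator chaining dependent steps, rather than a hierarchy of children (the step names `validate`, `enrich`, `persist` are invented for illustration):

```java
import java.util.concurrent.CompletableFuture;

// One coordinator drives the workflow as a flat chain of dependent
// steps instead of delegating each step to another layer of child
// actors. No thread blocks between steps.
public class FlatWorkflow {
    static CompletableFuture<String> validate(String req) {
        return CompletableFuture.supplyAsync(req::trim);
    }

    static CompletableFuture<String> enrich(String v) {
        return CompletableFuture.supplyAsync(() -> v + "-enriched");
    }

    static CompletableFuture<String> persist(String e) {
        return CompletableFuture.supplyAsync(() -> "saved:" + e);
    }

    static CompletableFuture<String> handle(String req) {
        return validate(req)
            .thenCompose(FlatWorkflow::enrich)
            .thenCompose(FlatWorkflow::persist);
    }

    public static void main(String[] args) {
        System.out.println(handle("  order-1  ").join());
    }
}
```

This avoids the thread-per-layer problem, but notice that the workflow's structure now lives in one long chain of combinators rather than in the actor hierarchy, which is the design distortion in question.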
That would mean recognizing that actors are quite limited, that failure tolerance should be handled mainly by traditional exception handling, and in any case that the desire for a failure-tolerant application should not have so much influence over its design. In that case, why bother with actors at all? Working with a framework that requires reading the implementation of every actor in order to understand the workflow of a complex, spaghetti-like knot of messages is much harder than working with frameworks based on a hierarchical structure, where the structure can be revealed to any desired depth by looking at method signatures, without necessarily peering into their implementations.
A lot of the benefits of actors, such as easy scaling out, seem less relevant or practical if only a small part of the application is implemented with actors.