
I noticed that some web frameworks, such as Play Framework, allow you to configure multiple thread pools with different sizes (number of threads). Let's say we run Play on a single machine with a single core. Wouldn't there be significant overhead from having multiple thread pools?

For example, a smaller thread pool assumes asynchronous operations, whereas a larger thread pool suggests many blocking calls, so that threads can context-switch. Both cases assume a parallelism factor based on the number of cores in the machine. My concern is that the processor then gets shared even further. How does this work?

Thanks!

Bilbo Baggins
user_1357

1 Answer


Play certainly allows you to configure multiple execution contexts (the equivalent of thread pools), but that does not mean you should do it, especially on a machine with a single core. By default the pool size should be kept low (close to the number of cores) for high throughput, assuming, of course, that the operations are all non-blocking. If you have blocking operations, the idea is to run them on a separate execution context; otherwise they lead to the exhaustion of the default ExecutionContext. The request processing pipeline in Play runs on that default ExecutionContext, which is limited to a small number of threads by default.
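To illustrate the idea of isolating blocking calls on their own pool, here is a minimal, self-contained sketch using plain `java.util.concurrent` executors rather than Play's own configuration (in Play itself you would define such dispatchers in `application.conf` and look them up by name). The pool names and the size of 32 are illustrative assumptions, not Play defaults:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TwoPools {
    public static void main(String[] args) throws Exception {
        // Small pool for CPU-bound work, sized close to the number of cores.
        ExecutorService cpuBound =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // Larger pool dedicated to blocking calls (32 is an arbitrary example size).
        ExecutorService blockingIo = Executors.newFixedThreadPool(32);

        // The blocking call is scheduled on the dedicated pool, so the
        // small cpuBound pool stays free to keep serving requests.
        CompletableFuture<String> result = CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(100); // simulates a blocking call, e.g. JDBC
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "blocking work done";
        }, blockingIo);

        System.out.println(result.get());

        cpuBound.shutdown();
        blockingIo.shutdown();
    }
}
```

Even on a single core this separation helps: the threads in the blocking pool spend most of their time parked waiting on I/O, so they cost little CPU, while the small pool's threads remain available for actual work.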

As to what happens when you have more threads than cores: it depends heavily on the operations you're running (whether they are I/O-bound, CPU-bound, etc.). One thread per core is generally optimal if you only perform CPU-bound work. See also this question.

Manuel Bernhardt
  • Could you elaborate on "If you have blocking operations the idea is to have them run on a separate execution context so that calls do not get in the way of the main HTTP request processing pipeline." What exactly is "request processing pipeline"? Thanks – user_1357 May 22 '14 at 15:07