
We have a Jetty webapp with a custom thread pool backed by Java 19 virtual threads.

The business logic we run in response to a request is usually IO-bound (e.g. DB queries), so virtual threads have been a great win for us, allowing us to have many more IO-bound requests in-flight at once than would be possible using platform threads, while avoiding writing explicitly async code.

But some of our requests have CPU-bound computation sections. And if enough requests happen to be running CPU-bound code at once, our whole webapp will lock up and become unresponsive to new requests until one of these requests resolves.

Java 19's virtual-threading support is apparently implemented by having all virtual threads schedule onto a single JVM-global bounded-size ForkJoinPool backed by N underlying carrier threads.
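For reference, N defaults to the number of available processors, and the default scheduler exposes (preview, undocumented-as-stable) system properties to tune it, e.g.:

```shell
# Java 19: virtual threads are a preview feature; the scheduler's
# carrier-thread count can be adjusted via system properties.
java --enable-preview \
     -Djdk.virtualThreadScheduler.parallelism=8 \
     -Djdk.virtualThreadScheduler.maxPoolSize=256 \
     MyApp
```

Raising the parallelism only delays the problem, though; it doesn't remove it.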

This means that, if I start many virtual threads — and at least N of these threads have some long-running CPU-bound operation as part of them — then as soon as these N threads reach this CPU-bound part, the entire JVM-global virtual thread pool will lock up / become blocked, since all available carrier threads are in use by one of the virtual threads running CPU-bound code.
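A minimal sketch of the starvation we're describing (class and method names are just for illustration): saturate every carrier with CPU-bound loops, then submit a trivial virtual-thread task and measure how long it waits for a carrier.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CarrierStarvation {
    // CPU-bound busy work: never blocks, so the virtual thread never
    // unmounts from its carrier while this runs.
    static long burn(long iters) {
        long x = 1;
        for (long i = 0; i < iters; i++) x = x * 31 + i;
        return x;
    }

    public static void main(String[] args) throws Exception {
        int carriers = Runtime.getRuntime().availableProcessors();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            // Occupy every carrier thread with CPU-bound work.
            for (int i = 0; i < carriers; i++) {
                exec.submit(() -> burn(500_000_000L));
            }
            long t0 = System.nanoTime();
            // This task is runnable immediately, but has no free carrier to run on,
            // so it waits until one of the CPU-bound threads finishes.
            Future<?> quick = exec.submit(() -> {});
            quick.get();
            System.out.printf("quick task waited %d ms%n",
                    (System.nanoTime() - t0) / 1_000_000);
        } // try-with-resources close() waits for the remaining tasks
    }
}
```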

What would be a best-practice, in designing an app that makes use of virtual threads as the top-level concurrency mechanism, if it has mixed IO-bound/CPU-bound workloads like this?

tsutsu
    The lack of preemptive task switching between virtual threads may add fuel to the flames, but generally, every server will become unresponsive "if enough requests happen to be running CPU-bound code at once". But submitting CPU-intensive jobs to any thread pool of your choice is still possible, the same way as before. The great thing about virtual threads is that you can simply `join()` on the task (or use any other form of blocking wait), instead of having to wrench your code into `then…` chains. – Holger Mar 17 '23 at 11:46
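    A sketch of the pattern the comment describes (pool name and sizing are assumptions, not a prescribed API): keep a dedicated fixed-size platform-thread pool for CPU-bound sections, and have the per-request virtual thread block on the result. Blocking on `Future.get()` parks the virtual thread and releases its carrier, so other requests keep making progress.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MixedWorkload {
    // Hypothetical dedicated pool for CPU-bound work: one platform thread per
    // core, so CPU work can never monopolize the virtual-thread carriers.
    static final ExecutorService CPU_POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    // Stand-in for a request's CPU-bound computation section.
    static long expensiveComputation(long n) {
        long x = 1;
        for (long i = 0; i < n; i++) x = x * 31 + i;
        return x;
    }

    // Runs on a virtual thread per request: IO-bound work stays on the virtual
    // thread; the CPU-bound section is handed to CPU_POOL, and we block on the
    // result with a plain get() instead of then... chains.
    static long handleRequest() throws Exception {
        // ... IO-bound work (DB queries etc.) runs here on the virtual thread ...
        Future<Long> result = CPU_POOL.submit(() -> expensiveComputation(10_000_000L));
        return result.get(); // parks the virtual thread, freeing its carrier
    }

    public static void main(String[] args) throws Exception {
        try (ExecutorService requests = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<Long> r = requests.submit(MixedWorkload::handleRequest);
            System.out.println("result = " + r.get());
        }
        CPU_POOL.shutdown();
    }
}
```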

0 Answers