
Disclaimer: this is not the same as this one.

Some of my tasks consume both network and CPU. Sometimes the bottleneck is the network, sometimes the CPU, so the ideal thread count varies and cannot be set precisely before the program runs. Ideally, each task would log its duration (and some other parameters): if task times increase, the thread pool should remove threads; if tasks complete faster, it should add threads.

Of course, there should be protection against continuous rebalancing: e.g. the system adding one thread, removing it, then repeating that over and over as task times fluctuate.

Does java offer something similar to this?

Cherry
    Not sure I'm following your logic about changing the pool size. The ones I'm familiar with are fixed by configuration at runtime and don't change. It's up to you to tune and monitor your app and determine thread size. Something like Netty and its ring buffer can be a good way to manage high throughput. It uses a single thread and non-blocking I/O. Scales like mad. Maybe you should look at it. – duffymo Jun 10 '22 at 23:16
    Have you read the documentation on [ThreadPoolExecutor](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ThreadPoolExecutor.html)? – Louis Wasserman Jun 10 '22 at 23:53
    What do you mean by "performance sticks into a network"? – Basil Bourque Jun 11 '22 at 03:26
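As one comment notes, `ThreadPoolExecutor` is normally tuned once, but its pool size can in fact be adjusted at runtime via `setCorePoolSize`/`setMaximumPoolSize`. A minimal sketch of the feedback loop the question describes, with hypothetical thresholds (the gap between the "fast" and "slow" thresholds acts as a dead band against the thrashing the question worries about):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AdaptivePool {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        long elapsedMillis = timeOneTask(pool);

        // Hypothetical feedback rule: grow when tasks finish quickly, shrink when slow.
        // The wide gap between thresholds (200 ms vs 1000 ms) prevents oscillation.
        int size = pool.getCorePoolSize();
        if (elapsedMillis < 200 && size < 32) {
            pool.setMaximumPoolSize(size + 1); // max must stay >= core, so raise it first
            pool.setCorePoolSize(size + 1);
        } else if (elapsedMillis > 1000 && size > 2) {
            pool.setCorePoolSize(size - 1);
            pool.setMaximumPoolSize(size - 1);
        }
        System.out.println("pool size now " + pool.getCorePoolSize());

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    static long timeOneTask(ThreadPoolExecutor pool) throws Exception {
        long start = System.nanoTime();
        pool.submit(() -> { /* simulated mixed network/CPU work */ }).get();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

This only sketches the mechanism; a real implementation would average task times over a window rather than reacting to a single sample.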

1 Answer


Currently in Java, threads are quite expensive, being mapped one-to-one to host OS threads. They impact both memory usage and CPU usage, so we usually limit their number, often to roughly the number of CPU cores. Increasing/decreasing the size of the thread pool is therefore not likely to make sense nowadays.

Virtual threads in Project Loom

Your scenario sounds ideal for the virtual threads (fibers) coming out of Project Loom. Many virtual threads are mapped to a single host OS thread.

In today's Java threading model, when Java code blocks, the host OS thread blocks too; no further work is performed on that thread. With virtual threads, when the Java code blocks, the virtual thread is dismounted from its assigned host OS thread and “parked”. When the blocking call eventually returns and the code needs further execution, the virtual thread is mounted onto a host OS thread (possibly a different one). This parking and mounting of virtual threads is much faster than blocking/unblocking host OS threads. Virtual threads have much less impact on memory and CPU, so we can have thousands, or even millions, of threads running simultaneously on conventional hardware.

In your scenario where your work load may go up or down, the virtual thread facility in Project Loom will automatically manage the thread pool and scheduling. You should see vastly improved throughput with no effort on your part.
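For example, on Java 21+ (or Java 19/20 with preview features enabled), blocking tasks can simply be submitted to a virtual-thread-per-task executor; there is no pool size to tune. A sketch, with a placeholder task body standing in for real network/CPU work:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        AtomicInteger done = new AtomicInteger();
        // One virtual thread per task; the JDK schedules them onto carrier threads.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(10); // blocking call parks the virtual thread cheaply
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for submitted tasks to finish
        System.out.println(done.get() + " tasks completed");
    }
}
```

Ten thousand simultaneously blocked platform threads would be prohibitive; ten thousand parked virtual threads are cheap.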

A few caveats:

  • Cheap threads can still do expensive things. So you may need to manage or limit your particular concurrent tasks to avoid blowing out memory or overburdening other resources.
  • Virtual threads only make sense for tasks that block. That covers most common Java work. But for entirely CPU-bound tasks such as video encoding/decoding, with little to no logging, storage I/O, network I/O, etc., stick with conventional Java threads.
  • There may be some situations where the particular content of your task prevents parking while blocked, leaving the virtual thread “pinned” to its host OS thread. You may choose to alter your code a bit to keep the virtual thread from being pinned. This may be especially the case with the initial releases of Loom. The situation is fluid in pre-release Loom, so we will need to stay informed as to changes.
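One known pinning case in early Loom builds is blocking inside a `synchronized` block, which pins the virtual thread to its carrier. A commonly suggested workaround (hedged: behavior varies by JDK release) is `java.util.concurrent.locks.ReentrantLock` instead of monitor locks:

```java
import java.util.concurrent.locks.ReentrantLock;

public class AvoidPinning {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;

    static void increment() throws InterruptedException {
        // synchronized (monitor) { Thread.sleep(1); }  // may pin the carrier thread
        lock.lock(); // ReentrantLock lets the virtual thread unmount while waiting
        try {
            Thread.sleep(1); // blocking while holding the lock, without pinning
            counter++;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = Thread.ofVirtual().start(() -> {
                try {
                    increment();
                } catch (InterruptedException ignored) {
                }
            });
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("counter = " + counter);
    }
}
```

Running with `-Djdk.tracePinnedThreads=full` reports any pinning that does occur.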

Virtual threads and other Project Loom features are available as preview features in Java 19, with experimental builds available now.

For more information, see the articles, presentations, and interviews by members of the Project Loom team such as Ron Pressler and Alan Bateman.

Basil Bourque