Let's imagine that we have n independent blocking IO tasks, e.g. REST calls to another server, whose answers we then need to combine. Each task can take about 10 seconds to complete.
We can process them sequentially and spend roughly n*10 seconds in total:
Task1Ans task1 = service1.doSomething();
Task2Ans task2 = service2.doSomething();
...
// combine task1, task2, ... into the result object
return result;
Another strategy is to process them in parallel using CompletableFuture and spend roughly 10 seconds on all tasks combined:
CompletableFuture<Task1Ans> task1Cs = CompletableFuture.supplyAsync(() -> service1.doSomething(), bestExecutor);
CompletableFuture<Task2Ans> task2Cs = CompletableFuture.supplyAsync(() -> service2.doSomething(), bestExecutor);

return CompletableFuture.allOf(task1Cs, task2Cs)
        .thenApply(nothing -> {
            ... // combine task1, task2 into the result object
            return result;
        })
        .join();
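For context, here is a minimal self-contained sketch of this parallel variant (Java 16+ for the records). Task1Ans, Task2Ans, Result, the simulated service calls and the combine step are all placeholders I made up for illustration, not the real services:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelCallsSketch {

    // Placeholder answer types standing in for the real DTOs.
    record Task1Ans(String value) {}
    record Task2Ans(String value) {}
    record Result(String combined) {}

    // Simulated blocking REST calls, each taking ~10 seconds.
    static Task1Ans callService1() {
        sleepSeconds(10);
        return new Task1Ans("answer-1");
    }

    static Task2Ans callService2() {
        sleepSeconds(10);
        return new Task2Ans("answer-2");
    }

    static void sleepSeconds(long s) {
        try {
            TimeUnit.SECONDS.sleep(s);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(30);
        try {
            CompletableFuture<Task1Ans> task1Cs =
                    CompletableFuture.supplyAsync(ParallelCallsSketch::callService1, executor);
            CompletableFuture<Task2Ans> task2Cs =
                    CompletableFuture.supplyAsync(ParallelCallsSketch::callService2, executor);

            Result result = CompletableFuture.allOf(task1Cs, task2Cs)
                    .thenApply(nothing -> {
                        // allOf has already completed here, so these join() calls return immediately.
                        Task1Ans t1 = task1Cs.join();
                        Task2Ans t2 = task2Cs.join();
                        return new Result(t1.value() + " + " + t2.value());
                    })
                    .join(); // blocks the caller for ~10s instead of ~20s

            System.out.println(result.combined());
        } finally {
            executor.shutdown();
        }
    }
}

Because allOf has already completed by the time thenApply runs, the join() calls inside the lambda do not block; only the outer join() waits, for roughly the duration of the slowest call.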
The second approach has clear benefits, but I can't work out which type of thread pool is best for this kind of task:
ExecutorService bestExecutor = Executors.newFixedThreadPool(30);
// or Executors.newCachedThreadPool()
// or Executors.newWorkStealingPool()
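To make the options concrete, here is a sketch of how the first candidate (a bounded fixed pool) might be wired up for blocking calls. The pool size of 30 and the thread-name prefix are arbitrary assumptions on my part, not a recommendation of this option over the others:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class IoPoolFactory {

    // Names the worker threads so blocked IO threads are easy to spot in a thread dump.
    private static ThreadFactory namedFactory(String prefix) {
        AtomicInteger counter = new AtomicInteger();
        return runnable -> new Thread(runnable, prefix + "-" + counter.incrementAndGet());
    }

    // A bounded pool: at most 30 blocking calls run concurrently, the rest wait in the queue.
    public static ExecutorService blockingIoPool() {
        return Executors.newFixedThreadPool(30, namedFactory("blocking-io"));
    }
}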
My question is: which ExecutorService is best for processing n parallel blocking IO tasks?