I have the following code, abbreviated here:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public final void doPost(HttpServletRequest request,
                         HttpServletResponse response) {
    int itemCount = itemsToGetFromCache.size();
    // a new pool per request, one thread per item
    ExecutorService service = Executors.newFixedThreadPool(itemCount);
    List<Future<?>> futures = new ArrayList<Future<?>>();
    for (int i = 0; i < itemCount; i++) {
        final int j = i;
        Future<?> f = service.submit(new Callable<Void>() {
            @Override
            public Void call() throws Exception {
                getItemFromRemoteCacheIfAvailableAndStoreInMemory(itemsToGetFromCache.get(j));
                return null;
            }
        });
        futures.add(f);
    }
    // wait for all tasks to complete before continuing
    for (Future<?> f : futures) {
        try {
            f.get();
        } catch (Exception e) {
            // handle exception
        }
    }
    // release the pool's threads once the work is done
    service.shutdown();
}
It's running in Tomcat 7. The typical item count is 30 and the typical number of simultaneous users is 200, so at peak this could spawn on the order of 200 × 30 = 6,000 short-lived threads. Some have warned that this could max out the server's threads and cause connections to be denied. Note that the calls to the remote cache are usually brief, taking around 60 ms.
Basically, I'm just trying to make the cache calls faster by running them in parallel. If the results are not in the cache, they'll be pulled from a database and cached for subsequent requests.
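For context, the method each task calls does roughly the following. This is only a sketch of it: remoteCache and localCache here are stand-ins (plain maps) for the real clients, and loadFromDatabase is a placeholder for the real lookup.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// stand-ins for the real remote cache client and in-memory store
private final Map<String, Object> remoteCache = new ConcurrentHashMap<String, Object>();
private final Map<String, Object> localCache = new ConcurrentHashMap<String, Object>();

private void getItemFromRemoteCacheIfAvailableAndStoreInMemory(String key) {
    Object value = remoteCache.get(key);   // usually a ~60 ms network round trip
    if (value == null) {
        value = loadFromDatabase(key);     // slower path on a cache miss
        remoteCache.put(key, value);       // cache it for subsequent requests
    }
    localCache.put(key, value);            // keep a copy in this JVM's memory
}

private Object loadFromDatabase(String key) {
    // placeholder for the real database lookup
    return "value-for-" + key;
}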
Is there a problem here? I assume the threads I spawn this way don't count against Tomcat's connector thread pool (maxThreads), so they shouldn't reduce the number of connections the server can accept. Am I right in assuming this? Are there any other concerns?
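One alternative I've considered, if per-request pools are the issue, is a single application-wide pool created once and shared by all requests. This is only a sketch: CacheExecutorListener, the "cachePool" attribute name, and the pool size of 50 are my own placeholders, and it assumes the listener is registered in web.xml or via @WebListener.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class CacheExecutorListener implements ServletContextListener {
    // a fixed cap on worker threads, independent of how many users hit doPost at once
    private static final int POOL_SIZE = 50;
    private ExecutorService pool;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        pool = Executors.newFixedThreadPool(POOL_SIZE);
        sce.getServletContext().setAttribute("cachePool", pool);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        pool.shutdown(); // stop the worker threads when the webapp is undeployed
    }
}

doPost would then look up the shared pool from the servlet context and submit its 30 tasks to it, e.g. via pool.invokeAll(tasks), which blocks until all of them finish, instead of constructing and tearing down a new pool on every request. Would that address the warning, or is the original approach fine?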