I hope this question qualifies as “a software algorithm” as per Stack Overflow's question guidelines. :)
I have a Java web application which, as part of servicing servlet requests, generates logging events that call out to a persistence provider. Each log event slows the servlet's response time.
pseudocode…

doGet() {
    write log                  // slow
    write log                  // slow
    outputStream.write( response )
    outputStream.close()
}
There is no need for these log events to occur synchronously. But rather than use a thread pool or some other voodoo, I had an idea: collect the log events as they happen and make the slower callouts only after the servlet has written its content. Calling close() on the output stream should encourage the servlet container (Tomcat, in this case) to write the response right away.
doGet() {
    stash log                  // quick
    stash log                  // quick
    outputStream.write( response )
    outputStream.close()       // response goes to the client now???
    write log                  // slow
    write log                  // slow
}
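To make the "stash then flush" idea concrete, here is a minimal sketch of the buffering part, decoupled from the servlet API. The class name `DeferredLogger` and the `Consumer` standing in for the persistence callout are my own illustration, not an existing API: stashing is just an in-memory append, and all the slow work is deferred to a single flush() call after the response has been handed off.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: buffer log events during request handling, then flush them
// to the (slow) persistence provider after the response is written.
public class DeferredLogger {
    private final List<String> stashed = new ArrayList<>();
    private final Consumer<String> slowSink; // stands in for the persistence callout

    public DeferredLogger(Consumer<String> slowSink) {
        this.slowSink = slowSink;
    }

    // Cheap: just remember the event in memory.
    public void stash(String event) {
        stashed.add(event);
    }

    // Expensive: make the callout for each stashed event, then clear.
    public void flush() {
        for (String event : stashed) {
            slowSink.accept(event);
        }
        stashed.clear();
    }

    public static void main(String[] args) {
        List<String> persisted = new ArrayList<>();
        DeferredLogger log = new DeferredLogger(persisted::add);

        // Inside doGet():
        log.stash("request received");   // quick
        log.stash("response computed");  // quick
        // outputStream.write(response);
        // outputStream.close();
        log.flush();                     // slow callouts happen here

        System.out.println(persisted);
    }
}
```

In a real servlet this instance would be per-request (e.g. a local variable in doGet), with flush() in a finally block so events are not lost when the handler throws.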
The answers to these questions may be container specific, as I think they are in the realm of undocumented behaviour.
Is the response going to be flushed to the client after the stream is closed but before the doGet / doPost method returns (where I am doing this "slow" logging work)? Do servlet filter chains have any effect on this behaviour?
Could the next incoming request, on a persistent connection or just from another client, block waiting for the servlet method to return?
So I guess the overall question is:
- Does the servlet container's request dispatcher make use of the time between when the servlet writes (and closes) the response and when the servlet method returns, rather than expecting the method to return immediately?
And yes, before you suggest it, I should perform some experiments. :)